Test Report: KVM_Linux_crio 19700

8b226b9d2c09f79dcc3a887682b5a6bd27a95904:2024-09-24:36357

Failed tests (30/318)

| Order | Failed test                                                             | Duration (s) |
|-------|-------------------------------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                                            | 72.93        |
| 34    | TestAddons/parallel/Ingress                                             | 152.23       |
| 36    | TestAddons/parallel/MetricsServer                                       | 368.3        |
| 165   | TestMultiControlPlane/serial/StopSecondaryNode                          | 141.26       |
| 166   | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop          | 5.53         |
| 167   | TestMultiControlPlane/serial/RestartSecondaryNode                       | 6.52         |
| 169   | TestMultiControlPlane/serial/RestartClusterKeepsNodes                   | 368.3        |
| 172   | TestMultiControlPlane/serial/StopCluster                                | 141.47       |
| 232   | TestMultiNode/serial/RestartKeepsNodes                                  | 323.18       |
| 234   | TestMultiNode/serial/StopMultiNode                                      | 144.54       |
| 241   | TestPreload                                                             | 158.84       |
| 249   | TestKubernetesUpgrade                                                   | 365.54       |
| 321   | TestStartStop/group/old-k8s-version/serial/FirstStart                   | 274.63       |
| 346   | TestStartStop/group/embed-certs/serial/Stop                             | 138.91       |
| 349   | TestStartStop/group/no-preload/serial/Stop                              | 138.95       |
| 352   | TestStartStop/group/default-k8s-diff-port/serial/Stop                   | 139.15       |
| 353   | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop             | 12.42        |
| 354   | TestStartStop/group/old-k8s-version/serial/DeployApp                    | 0.46         |
| 355   | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive       | 95.68        |
| 357   | TestStartStop/group/no-preload/serial/EnableAddonAfterStop              | 12.38        |
| 359   | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop   | 12.38        |
| 363   | TestStartStop/group/old-k8s-version/serial/SecondStart                  | 740.12       |
| 364   | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop            | 544.09       |
| 365   | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.1        |
| 366   | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop           | 544.08       |
| 367   | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop       | 543.28       |
| 368   | TestStartStop/group/no-preload/serial/AddonExistsAfterStop              | 425.2        |
| 369   | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop   | 428.59       |
| 370   | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop             | 359.17       |
| 371   | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop         | 147.7        |
TestAddons/parallel/Registry (72.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 6.78628ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0924 18:30:26.706797   10949 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 18:30:26.706845   10949 kapi.go:107] duration metric: took 7.24964ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003049975s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00406708s
addons_test.go:338: (dbg) Run:  kubectl --context addons-218885 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-218885 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-218885 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080992687s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-218885 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 ip
2024/09/24 18:31:36 [DEBUG] GET http://192.168.39.215:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable registry --alsologtostderr -v=1
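The failing step above is the in-cluster reachability probe: addons_test.go:343 launches a one-shot busybox pod that fetches the registry Service by its cluster DNS name, and that request timed out after a minute even though the registry and registry-proxy pods were Running. A minimal way to rerun the same probe by hand against this profile is sketched below; the first command is copied from the log above, while the follow-up kubectl get is only a suggested next check, not part of the test.

  # Re-run the probe from addons_test.go:343 (verbatim from the log above)
  kubectl --context addons-218885 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # If it still times out, inspecting the registry Service and its endpoints
  # in kube-system is a reasonable next step (assumed follow-up, not part of
  # the test):
  kubectl --context addons-218885 -n kube-system get svc,endpoints registry
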
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-218885 -n addons-218885
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 logs -n 25: (1.289497869s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p download-only-366438                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-366438                                                                     | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | -o=json --download-only                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | -p download-only-880989                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-366438                                                                     | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | binary-mirror-303583                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40655                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-303583                                                                     | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-218885 --wait=true                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh cat                                                                       | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | /opt/local-path-provisioner/pvc-32fb6863-7fde-481e-85f8-da616d5f9350_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | -p addons-218885                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh curl -s                                                                   | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-218885 ip                                                                            | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:12.325736   11602 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:12.325986   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.325997   11602 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:12.326003   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.326193   11602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:20:12.326790   11602 out.go:352] Setting JSON to false
	I0924 18:20:12.327640   11602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":163,"bootTime":1727201849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:20:12.327726   11602 start.go:139] virtualization: kvm guest
	I0924 18:20:12.329631   11602 out.go:177] * [addons-218885] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:20:12.331012   11602 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:20:12.331079   11602 notify.go:220] Checking for updates...
	I0924 18:20:12.333440   11602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:12.334628   11602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:20:12.335823   11602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.337065   11602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:20:12.338153   11602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:20:12.339404   11602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:12.370285   11602 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:20:12.371583   11602 start.go:297] selected driver: kvm2
	I0924 18:20:12.371597   11602 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:20:12.371608   11602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:20:12.372940   11602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.373043   11602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:20:12.393549   11602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:20:12.393593   11602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:12.393793   11602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:12.393823   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:12.393846   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:12.393854   11602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:12.393894   11602 start.go:340] cluster config:
	{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:12.393973   11602 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.395768   11602 out.go:177] * Starting "addons-218885" primary control-plane node in "addons-218885" cluster
	I0924 18:20:12.396963   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:12.396994   11602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:20:12.397002   11602 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:12.397076   11602 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:20:12.397086   11602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:20:12.397361   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:12.397381   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json: {Name:mk8ae020c4167ae6b07f3b581ad7b941f00493e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:12.397501   11602 start.go:360] acquireMachinesLock for addons-218885: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:20:12.397544   11602 start.go:364] duration metric: took 30.473µs to acquireMachinesLock for "addons-218885"
	I0924 18:20:12.397560   11602 start.go:93] Provisioning new machine with config: &{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:20:12.397621   11602 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:20:12.399224   11602 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0924 18:20:12.399337   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:20:12.399361   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:20:12.413485   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0924 18:20:12.413984   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:20:12.414522   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:20:12.414543   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:20:12.414994   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:20:12.415195   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:12.415361   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:12.415550   11602 start.go:159] libmachine.API.Create for "addons-218885" (driver="kvm2")
	I0924 18:20:12.415574   11602 client.go:168] LocalClient.Create starting
	I0924 18:20:12.415623   11602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:20:12.521230   11602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:20:12.771341   11602 main.go:141] libmachine: Running pre-create checks...
	I0924 18:20:12.771362   11602 main.go:141] libmachine: (addons-218885) Calling .PreCreateCheck
	I0924 18:20:12.771809   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:12.772210   11602 main.go:141] libmachine: Creating machine...
	I0924 18:20:12.772225   11602 main.go:141] libmachine: (addons-218885) Calling .Create
	I0924 18:20:12.772358   11602 main.go:141] libmachine: (addons-218885) Creating KVM machine...
	I0924 18:20:12.773495   11602 main.go:141] libmachine: (addons-218885) DBG | found existing default KVM network
	I0924 18:20:12.774264   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.774133   11624 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0924 18:20:12.774303   11602 main.go:141] libmachine: (addons-218885) DBG | created network xml: 
	I0924 18:20:12.774319   11602 main.go:141] libmachine: (addons-218885) DBG | <network>
	I0924 18:20:12.774325   11602 main.go:141] libmachine: (addons-218885) DBG |   <name>mk-addons-218885</name>
	I0924 18:20:12.774334   11602 main.go:141] libmachine: (addons-218885) DBG |   <dns enable='no'/>
	I0924 18:20:12.774360   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774381   11602 main.go:141] libmachine: (addons-218885) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:20:12.774452   11602 main.go:141] libmachine: (addons-218885) DBG |     <dhcp>
	I0924 18:20:12.774493   11602 main.go:141] libmachine: (addons-218885) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:20:12.774513   11602 main.go:141] libmachine: (addons-218885) DBG |     </dhcp>
	I0924 18:20:12.774524   11602 main.go:141] libmachine: (addons-218885) DBG |   </ip>
	I0924 18:20:12.774536   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774546   11602 main.go:141] libmachine: (addons-218885) DBG | </network>
	I0924 18:20:12.774569   11602 main.go:141] libmachine: (addons-218885) DBG | 
	I0924 18:20:12.779356   11602 main.go:141] libmachine: (addons-218885) DBG | trying to create private KVM network mk-addons-218885 192.168.39.0/24...
	I0924 18:20:12.840345   11602 main.go:141] libmachine: (addons-218885) DBG | private KVM network mk-addons-218885 192.168.39.0/24 created
	I0924 18:20:12.840381   11602 main.go:141] libmachine: (addons-218885) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:12.840394   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.840325   11624 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.840402   11602 main.go:141] libmachine: (addons-218885) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:20:12.840503   11602 main.go:141] libmachine: (addons-218885) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:20:13.080883   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.080784   11624 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa...
	I0924 18:20:13.196783   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196657   11624 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk...
	I0924 18:20:13.196813   11602 main.go:141] libmachine: (addons-218885) DBG | Writing magic tar header
	I0924 18:20:13.196826   11602 main.go:141] libmachine: (addons-218885) DBG | Writing SSH key tar header
	I0924 18:20:13.196836   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196759   11624 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:13.196852   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885
	I0924 18:20:13.196869   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:20:13.196911   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 (perms=drwx------)
	I0924 18:20:13.196926   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:13.196942   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:20:13.196954   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:20:13.196965   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:20:13.196984   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:20:13.196995   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home
	I0924 18:20:13.197007   11602 main.go:141] libmachine: (addons-218885) DBG | Skipping /home - not owner
	I0924 18:20:13.197025   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:20:13.197038   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:20:13.197053   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:20:13.197070   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:20:13.197083   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:13.198004   11602 main.go:141] libmachine: (addons-218885) define libvirt domain using xml: 
	I0924 18:20:13.198029   11602 main.go:141] libmachine: (addons-218885) <domain type='kvm'>
	I0924 18:20:13.198041   11602 main.go:141] libmachine: (addons-218885)   <name>addons-218885</name>
	I0924 18:20:13.198049   11602 main.go:141] libmachine: (addons-218885)   <memory unit='MiB'>4000</memory>
	I0924 18:20:13.198059   11602 main.go:141] libmachine: (addons-218885)   <vcpu>2</vcpu>
	I0924 18:20:13.198066   11602 main.go:141] libmachine: (addons-218885)   <features>
	I0924 18:20:13.198071   11602 main.go:141] libmachine: (addons-218885)     <acpi/>
	I0924 18:20:13.198077   11602 main.go:141] libmachine: (addons-218885)     <apic/>
	I0924 18:20:13.198085   11602 main.go:141] libmachine: (addons-218885)     <pae/>
	I0924 18:20:13.198092   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198097   11602 main.go:141] libmachine: (addons-218885)   </features>
	I0924 18:20:13.198104   11602 main.go:141] libmachine: (addons-218885)   <cpu mode='host-passthrough'>
	I0924 18:20:13.198109   11602 main.go:141] libmachine: (addons-218885)   
	I0924 18:20:13.198116   11602 main.go:141] libmachine: (addons-218885)   </cpu>
	I0924 18:20:13.198121   11602 main.go:141] libmachine: (addons-218885)   <os>
	I0924 18:20:13.198129   11602 main.go:141] libmachine: (addons-218885)     <type>hvm</type>
	I0924 18:20:13.198135   11602 main.go:141] libmachine: (addons-218885)     <boot dev='cdrom'/>
	I0924 18:20:13.198140   11602 main.go:141] libmachine: (addons-218885)     <boot dev='hd'/>
	I0924 18:20:13.198167   11602 main.go:141] libmachine: (addons-218885)     <bootmenu enable='no'/>
	I0924 18:20:13.198188   11602 main.go:141] libmachine: (addons-218885)   </os>
	I0924 18:20:13.198200   11602 main.go:141] libmachine: (addons-218885)   <devices>
	I0924 18:20:13.198211   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='cdrom'>
	I0924 18:20:13.198226   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/boot2docker.iso'/>
	I0924 18:20:13.198237   11602 main.go:141] libmachine: (addons-218885)       <target dev='hdc' bus='scsi'/>
	I0924 18:20:13.198247   11602 main.go:141] libmachine: (addons-218885)       <readonly/>
	I0924 18:20:13.198257   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198267   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='disk'>
	I0924 18:20:13.198282   11602 main.go:141] libmachine: (addons-218885)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:20:13.198296   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk'/>
	I0924 18:20:13.198308   11602 main.go:141] libmachine: (addons-218885)       <target dev='hda' bus='virtio'/>
	I0924 18:20:13.198316   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198328   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198339   11602 main.go:141] libmachine: (addons-218885)       <source network='mk-addons-218885'/>
	I0924 18:20:13.198352   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198367   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198380   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198390   11602 main.go:141] libmachine: (addons-218885)       <source network='default'/>
	I0924 18:20:13.198398   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198407   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198418   11602 main.go:141] libmachine: (addons-218885)     <serial type='pty'>
	I0924 18:20:13.198427   11602 main.go:141] libmachine: (addons-218885)       <target port='0'/>
	I0924 18:20:13.198462   11602 main.go:141] libmachine: (addons-218885)     </serial>
	I0924 18:20:13.198485   11602 main.go:141] libmachine: (addons-218885)     <console type='pty'>
	I0924 18:20:13.198491   11602 main.go:141] libmachine: (addons-218885)       <target type='serial' port='0'/>
	I0924 18:20:13.198499   11602 main.go:141] libmachine: (addons-218885)     </console>
	I0924 18:20:13.198504   11602 main.go:141] libmachine: (addons-218885)     <rng model='virtio'>
	I0924 18:20:13.198513   11602 main.go:141] libmachine: (addons-218885)       <backend model='random'>/dev/random</backend>
	I0924 18:20:13.198518   11602 main.go:141] libmachine: (addons-218885)     </rng>
	I0924 18:20:13.198522   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198527   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198533   11602 main.go:141] libmachine: (addons-218885)   </devices>
	I0924 18:20:13.198538   11602 main.go:141] libmachine: (addons-218885) </domain>
	I0924 18:20:13.198542   11602 main.go:141] libmachine: (addons-218885) 
	I0924 18:20:13.204102   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:cf:a6:03 in network default
	I0924 18:20:13.204625   11602 main.go:141] libmachine: (addons-218885) Ensuring networks are active...
	I0924 18:20:13.204646   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:13.205345   11602 main.go:141] libmachine: (addons-218885) Ensuring network default is active
	I0924 18:20:13.205671   11602 main.go:141] libmachine: (addons-218885) Ensuring network mk-addons-218885 is active
	I0924 18:20:13.207039   11602 main.go:141] libmachine: (addons-218885) Getting domain xml...
	I0924 18:20:13.207785   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:14.575302   11602 main.go:141] libmachine: (addons-218885) Waiting to get IP...
	I0924 18:20:14.575964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.576313   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.576343   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.576303   11624 retry.go:31] will retry after 274.373447ms: waiting for machine to come up
	I0924 18:20:14.852639   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.852971   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.852999   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.852930   11624 retry.go:31] will retry after 320.247846ms: waiting for machine to come up
	I0924 18:20:15.174341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.174769   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.174795   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.174721   11624 retry.go:31] will retry after 480.520038ms: waiting for machine to come up
	I0924 18:20:15.656403   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.656812   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.656838   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.656779   11624 retry.go:31] will retry after 445.239578ms: waiting for machine to come up
	I0924 18:20:16.103322   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.103649   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.103675   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.103614   11624 retry.go:31] will retry after 512.464509ms: waiting for machine to come up
	I0924 18:20:16.617221   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.617724   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.617760   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.617646   11624 retry.go:31] will retry after 857.414245ms: waiting for machine to come up
	I0924 18:20:17.477266   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:17.477652   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:17.477673   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:17.477626   11624 retry.go:31] will retry after 806.166754ms: waiting for machine to come up
	I0924 18:20:18.285640   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:18.286077   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:18.286100   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:18.286052   11624 retry.go:31] will retry after 1.16238491s: waiting for machine to come up
	I0924 18:20:19.450511   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:19.450884   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:19.450904   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:19.450866   11624 retry.go:31] will retry after 1.335718023s: waiting for machine to come up
	I0924 18:20:20.788441   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:20.788913   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:20.788943   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:20.788872   11624 retry.go:31] will retry after 1.799499594s: waiting for machine to come up
	I0924 18:20:22.589666   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:22.590013   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:22.590062   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:22.589996   11624 retry.go:31] will retry after 1.859729205s: waiting for machine to come up
	I0924 18:20:24.452908   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:24.453276   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:24.453302   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:24.453236   11624 retry.go:31] will retry after 2.767497543s: waiting for machine to come up
	I0924 18:20:27.223890   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:27.224340   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:27.224362   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:27.224297   11624 retry.go:31] will retry after 4.46492502s: waiting for machine to come up
	I0924 18:20:31.694510   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:31.694968   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:31.694990   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:31.694927   11624 retry.go:31] will retry after 4.457689137s: waiting for machine to come up
	I0924 18:20:36.156477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157022   11602 main.go:141] libmachine: (addons-218885) Found IP for machine: 192.168.39.215
	I0924 18:20:36.157042   11602 main.go:141] libmachine: (addons-218885) Reserving static IP address...
	I0924 18:20:36.157083   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has current primary IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157396   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "addons-218885", mac: "52:54:00:4f:2a:e2", ip: "192.168.39.215"} in network mk-addons-218885
	I0924 18:20:36.229161   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:36.229194   11602 main.go:141] libmachine: (addons-218885) Reserved static IP address: 192.168.39.215
	I0924 18:20:36.229207   11602 main.go:141] libmachine: (addons-218885) Waiting for SSH to be available...
	I0924 18:20:36.231373   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.231611   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885
	I0924 18:20:36.231644   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find defined IP address of network mk-addons-218885 interface with MAC address 52:54:00:4f:2a:e2
	I0924 18:20:36.231777   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:36.231800   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:36.231882   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:36.231906   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:36.231920   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:36.243616   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:20:36.243646   11602 main.go:141] libmachine: (addons-218885) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:20:36.243654   11602 main.go:141] libmachine: (addons-218885) DBG | command : exit 0
	I0924 18:20:36.243658   11602 main.go:141] libmachine: (addons-218885) DBG | err     : exit status 255
	I0924 18:20:36.243667   11602 main.go:141] libmachine: (addons-218885) DBG | output  : 
	I0924 18:20:39.245429   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:39.247941   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248310   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.248361   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248472   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:39.248497   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:39.248544   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:39.248581   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:39.248599   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:39.370720   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: <nil>: 
	I0924 18:20:39.371024   11602 main.go:141] libmachine: (addons-218885) KVM machine creation complete!
	I0924 18:20:39.371383   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:39.371926   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372115   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372292   11602 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:20:39.372308   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:20:39.373716   11602 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:20:39.373728   11602 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:20:39.373737   11602 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:20:39.373742   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.375983   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376314   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.376342   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.376746   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.376896   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.377041   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.377176   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.377355   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.377366   11602 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:20:39.474162   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.474185   11602 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:20:39.474192   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.476622   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477004   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.477030   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.477426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477578   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477699   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.477853   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.478018   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.478028   11602 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:20:39.575513   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:20:39.575630   11602 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:20:39.575647   11602 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:20:39.575659   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.575913   11602 buildroot.go:166] provisioning hostname "addons-218885"
	I0924 18:20:39.575936   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.576144   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.578676   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579102   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.579128   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579285   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.579467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579584   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579717   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.579893   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.580094   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.580111   11602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-218885 && echo "addons-218885" | sudo tee /etc/hostname
	I0924 18:20:39.692677   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-218885
	
	I0924 18:20:39.692711   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.695685   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696027   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.696057   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.696411   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696598   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696757   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.696917   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.697115   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.697138   11602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-218885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-218885/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-218885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:20:39.803035   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.803068   11602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:20:39.803143   11602 buildroot.go:174] setting up certificates
	I0924 18:20:39.803160   11602 provision.go:84] configureAuth start
	I0924 18:20:39.803180   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.803472   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:39.806086   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806371   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.806397   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806540   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.808868   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809212   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.809237   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809404   11602 provision.go:143] copyHostCerts
	I0924 18:20:39.809469   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:20:39.809588   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:20:39.809648   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:20:39.809697   11602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.addons-218885 san=[127.0.0.1 192.168.39.215 addons-218885 localhost minikube]
	I0924 18:20:40.082244   11602 provision.go:177] copyRemoteCerts
	I0924 18:20:40.082308   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:20:40.082332   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.085171   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085563   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.085591   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085797   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.085983   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.086103   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.086224   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.165135   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:20:40.192252   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:20:40.219501   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:20:40.246264   11602 provision.go:87] duration metric: took 443.085344ms to configureAuth
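The server certificate generated above is copied to /etc/docker/server.pem on the guest and carries the SANs requested at generation time (127.0.0.1, 192.168.39.215, addons-218885, localhost, minikube). A quick way to confirm them over the same SSH key, sketched here for illustration (the openssl invocation is not part of the captured log, and the exact output formatting may differ):

	ssh -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa docker@192.168.39.215 \
	  'sudo openssl x509 -in /etc/docker/server.pem -noout -text' | grep -A1 'Subject Alternative Name'
	#   X509v3 Subject Alternative Name:
	#       DNS:addons-218885, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.39.215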
	I0924 18:20:40.246293   11602 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:20:40.246484   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:20:40.246570   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.249244   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249629   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.249653   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249818   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.250018   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250308   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.250488   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.250644   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.250658   11602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:20:40.468815   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:20:40.468854   11602 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:20:40.468866   11602 main.go:141] libmachine: (addons-218885) Calling .GetURL
	I0924 18:20:40.470093   11602 main.go:141] libmachine: (addons-218885) DBG | Using libvirt version 6000000
	I0924 18:20:40.472092   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472382   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.472406   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472571   11602 main.go:141] libmachine: Docker is up and running!
	I0924 18:20:40.472589   11602 main.go:141] libmachine: Reticulating splines...
	I0924 18:20:40.472597   11602 client.go:171] duration metric: took 28.057014034s to LocalClient.Create
	I0924 18:20:40.472624   11602 start.go:167] duration metric: took 28.057073554s to libmachine.API.Create "addons-218885"
	I0924 18:20:40.472634   11602 start.go:293] postStartSetup for "addons-218885" (driver="kvm2")
	I0924 18:20:40.472648   11602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:20:40.472666   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.472877   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:20:40.472906   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.475196   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475548   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.475575   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475695   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.475855   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.476016   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.476154   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.552548   11602 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:20:40.556457   11602 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:20:40.556481   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:20:40.556558   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:20:40.556592   11602 start.go:296] duration metric: took 83.950837ms for postStartSetup
	I0924 18:20:40.556636   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:40.557160   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.559791   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560070   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.560094   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560299   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:40.560458   11602 start.go:128] duration metric: took 28.162828516s to createHost
	I0924 18:20:40.560481   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.562477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.562977   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.563007   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.563174   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.563321   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563475   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563572   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.563723   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.563885   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.563895   11602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:20:40.659437   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727202040.641796120
	
	I0924 18:20:40.659459   11602 fix.go:216] guest clock: 1727202040.641796120
	I0924 18:20:40.659466   11602 fix.go:229] Guest: 2024-09-24 18:20:40.64179612 +0000 UTC Remote: 2024-09-24 18:20:40.560467466 +0000 UTC m=+28.266972018 (delta=81.328654ms)
	I0924 18:20:40.659526   11602 fix.go:200] guest clock delta is within tolerance: 81.328654ms
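The reported delta is simply the guest clock (date +%s.%N over SSH) minus the host-side timestamp taken at the same moment; since both fall within the same whole second, it reduces to the nanosecond difference. Recomputed for illustration (not part of the log):

	# guest:  1727202040.641796120
	# remote: 1727202040.560467466
	echo $(( 641796120 - 560467466 ))   # -> 81328654 ns == 81.328654 ms, matching the delta above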
	I0924 18:20:40.659536   11602 start.go:83] releasing machines lock for "addons-218885", held for 28.261982282s
	I0924 18:20:40.659570   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.659802   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.662293   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662595   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.662623   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662765   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663205   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663369   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663431   11602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:20:40.663474   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.663578   11602 ssh_runner.go:195] Run: cat /version.json
	I0924 18:20:40.663600   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.666017   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666043   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666366   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666401   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666427   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666442   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666568   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666579   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666726   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666735   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666891   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.666925   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.667053   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.667063   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.762590   11602 ssh_runner.go:195] Run: systemctl --version
	I0924 18:20:40.768558   11602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:20:40.923618   11602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:20:40.929415   11602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:20:40.929483   11602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:20:40.944982   11602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:20:40.945009   11602 start.go:495] detecting cgroup driver to use...
	I0924 18:20:40.945091   11602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:20:40.960695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:20:40.974660   11602 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:20:40.974712   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:20:40.988081   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:20:41.001845   11602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:20:41.116471   11602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:20:41.278206   11602 docker.go:233] disabling docker service ...
	I0924 18:20:41.278282   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:20:41.292340   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:20:41.304936   11602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:20:41.427259   11602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:20:41.556695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:20:41.569928   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:41.587343   11602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:20:41.587395   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.597357   11602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:20:41.597420   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.607453   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.617617   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.627570   11602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:20:41.637701   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.647609   11602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.663924   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
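Taken together, the sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager, "pod" as the conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. Collected in one place for readability (a sketch of the same edits; CONF is an illustrative shell variable, and the full drop-in file is not shown in the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo rm -rf /etc/cni/net.mk
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"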
	I0924 18:20:41.674020   11602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:20:41.683135   11602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:20:41.683188   11602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:20:41.696102   11602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:20:41.705462   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:41.823495   11602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:20:41.913369   11602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:20:41.913456   11602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:20:41.918292   11602 start.go:563] Will wait 60s for crictl version
	I0924 18:20:41.918361   11602 ssh_runner.go:195] Run: which crictl
	I0924 18:20:41.921901   11602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:20:41.958038   11602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:20:41.958153   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:41.985269   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:42.014805   11602 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:20:42.016093   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:42.018614   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019098   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:42.019139   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019258   11602 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:20:42.022974   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:20:42.034408   11602 kubeadm.go:883] updating cluster {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:20:42.034513   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:42.034569   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:42.064250   11602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:20:42.064317   11602 ssh_runner.go:195] Run: which lz4
	I0924 18:20:42.068235   11602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:20:42.072127   11602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:20:42.072165   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:20:43.181256   11602 crio.go:462] duration metric: took 1.11306138s to copy over tarball
	I0924 18:20:43.181321   11602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:20:45.254978   11602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.073631711s)
	I0924 18:20:45.255003   11602 crio.go:469] duration metric: took 2.07372259s to extract the tarball
	I0924 18:20:45.255011   11602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:20:45.291605   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:45.334151   11602 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:20:45.334171   11602 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:20:45.334179   11602 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0924 18:20:45.334266   11602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-218885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:20:45.334326   11602 ssh_runner.go:195] Run: crio config
	I0924 18:20:45.379706   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:45.379729   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:45.379738   11602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:20:45.379759   11602 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-218885 NodeName:addons-218885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:20:45.379870   11602 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-218885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:20:45.379931   11602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:45.389532   11602 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:20:45.389607   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:20:45.398734   11602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 18:20:45.414812   11602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:20:45.430737   11602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0924 18:20:45.447185   11602 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0924 18:20:45.451002   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
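Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: drop any stale line for the name, then append the current mapping. After this step the guest's /etc/hosts should contain roughly the following entries (illustrative check, other entries omitted):

	grep 'minikube.internal' /etc/hosts
	# 192.168.39.1	host.minikube.internal
	# 192.168.39.215	control-plane.minikube.internal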
	I0924 18:20:45.463061   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:45.578185   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:20:45.595455   11602 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885 for IP: 192.168.39.215
	I0924 18:20:45.595478   11602 certs.go:194] generating shared ca certs ...
	I0924 18:20:45.595493   11602 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.595628   11602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:20:45.693821   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt ...
	I0924 18:20:45.693849   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt: {Name:mk739c8ca5d31150a754381b18341274a55f3194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694000   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key ...
	I0924 18:20:45.694011   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key: {Name:mk41697d54972101e4b583bdb12adb625c8a2ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694084   11602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:20:45.949465   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt ...
	I0924 18:20:45.949495   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt: {Name:mk6c99d30fd3bd72ef67c33fc7a8ad8032d9e547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949649   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key ...
	I0924 18:20:45.949659   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key: {Name:mk4a9ced92c9b128cb0109242c1c85bc6095111a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949724   11602 certs.go:256] generating profile certs ...
	I0924 18:20:45.949773   11602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key
	I0924 18:20:45.949788   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt with IP's: []
	I0924 18:20:46.111748   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt ...
	I0924 18:20:46.111780   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: {Name:mkcda67505a1d19822a9bd6aa070be1298e2b766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.111931   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key ...
	I0924 18:20:46.111941   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key: {Name:mk7ff22fb920d31c4caef16f50e62ca111cf8f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.112006   11602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9
	I0924 18:20:46.112025   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0924 18:20:46.368887   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 ...
	I0924 18:20:46.368928   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9: {Name:mk3ea14ef69c0bf68f59451ed6ddde96239c0b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369111   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 ...
	I0924 18:20:46.369127   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9: {Name:mk094871a112eec146c05c29dae97b6b80490a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369227   11602 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt
	I0924 18:20:46.369341   11602 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key
	I0924 18:20:46.369416   11602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key
	I0924 18:20:46.369442   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt with IP's: []
	I0924 18:20:46.475111   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt ...
	I0924 18:20:46.475146   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt: {Name:mk14e8d60731076f4aeed39447637ad04acbd93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475328   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key ...
	I0924 18:20:46.475341   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key: {Name:mk1261b7340504044d617837647a0294e6e60c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475529   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:20:46.475574   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:20:46.475609   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:20:46.475644   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:20:46.476210   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:20:46.510341   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:20:46.534245   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:20:46.573657   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:20:46.597284   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 18:20:46.619923   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:20:46.643112   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:20:46.666301   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 18:20:46.689259   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:20:46.712125   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:20:46.728579   11602 ssh_runner.go:195] Run: openssl version
	I0924 18:20:46.734238   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:20:46.744739   11602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749263   11602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749321   11602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.755061   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
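The /etc/ssl/certs/b5213941.0 symlink name is the OpenSSL subject hash computed by the previous command plus a ".0" suffix, which is what lets TLS tooling resolve the minikube CA from the system trust directory. Illustrative check (not part of the captured log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941  (hence /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem above)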
	I0924 18:20:46.765777   11602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:20:46.770113   11602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:20:46.770173   11602 kubeadm.go:392] StartCluster: {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:46.770261   11602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:20:46.770309   11602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:20:46.805114   11602 cri.go:89] found id: ""
	I0924 18:20:46.805195   11602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:20:46.816665   11602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:20:46.826242   11602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:20:46.835662   11602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:20:46.835682   11602 kubeadm.go:157] found existing configuration files:
	
	I0924 18:20:46.835732   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:20:46.844574   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:20:46.844639   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:20:46.853707   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:20:46.862302   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:20:46.862358   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:20:46.871498   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.880100   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:20:46.880165   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.889113   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:20:46.898369   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:20:46.898428   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:20:46.907411   11602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:20:46.952940   11602 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:20:46.953015   11602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:20:47.040390   11602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:20:47.040491   11602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:20:47.040607   11602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:20:47.049167   11602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:20:47.050888   11602 out.go:235]   - Generating certificates and keys ...
	I0924 18:20:47.050961   11602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:20:47.051052   11602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:20:47.131678   11602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:20:47.547895   11602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:20:47.601285   11602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:20:47.832128   11602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:20:48.031950   11602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:20:48.032124   11602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.210630   11602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:20:48.210816   11602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.300960   11602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:20:48.605685   11602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:20:48.809001   11602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:20:48.809097   11602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:20:49.163476   11602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:20:49.371134   11602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:20:49.529427   11602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:20:49.721235   11602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:20:49.836924   11602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:20:49.837300   11602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:20:49.839677   11602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:20:49.841378   11602 out.go:235]   - Booting up control plane ...
	I0924 18:20:49.841496   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:20:49.841559   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:20:49.841618   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:20:49.858387   11602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:20:49.866657   11602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:20:49.866723   11602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:20:49.987294   11602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:20:49.987476   11602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:20:50.488576   11602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.853577ms
	I0924 18:20:50.488656   11602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:20:55.489419   11602 kubeadm.go:310] [api-check] The API server is healthy after 5.002843483s
	I0924 18:20:55.501919   11602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:20:55.515354   11602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:20:55.545511   11602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:20:55.545740   11602 kubeadm.go:310] [mark-control-plane] Marking the node addons-218885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:20:55.558654   11602 kubeadm.go:310] [bootstrap-token] Using token: wfmddn.jqm9ftj1c9z5a6vs
	I0924 18:20:55.560273   11602 out.go:235]   - Configuring RBAC rules ...
	I0924 18:20:55.560435   11602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:20:55.568873   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:20:55.578532   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:20:55.582388   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:20:55.586382   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:20:55.593349   11602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:20:55.897630   11602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:20:56.326166   11602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:20:56.895415   11602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:20:56.896193   11602 kubeadm.go:310] 
	I0924 18:20:56.896289   11602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:20:56.896301   11602 kubeadm.go:310] 
	I0924 18:20:56.896422   11602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:20:56.896443   11602 kubeadm.go:310] 
	I0924 18:20:56.896479   11602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:20:56.896571   11602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:20:56.896662   11602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:20:56.896677   11602 kubeadm.go:310] 
	I0924 18:20:56.896760   11602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:20:56.896768   11602 kubeadm.go:310] 
	I0924 18:20:56.896837   11602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:20:56.896846   11602 kubeadm.go:310] 
	I0924 18:20:56.896915   11602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:20:56.897013   11602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:20:56.897102   11602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:20:56.897113   11602 kubeadm.go:310] 
	I0924 18:20:56.897214   11602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:20:56.897334   11602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:20:56.897344   11602 kubeadm.go:310] 
	I0924 18:20:56.897455   11602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.897590   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:20:56.897626   11602 kubeadm.go:310] 	--control-plane 
	I0924 18:20:56.897639   11602 kubeadm.go:310] 
	I0924 18:20:56.897747   11602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:20:56.897756   11602 kubeadm.go:310] 
	I0924 18:20:56.897876   11602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.898032   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:20:56.898926   11602 kubeadm.go:310] W0924 18:20:46.938376     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899246   11602 kubeadm.go:310] W0924 18:20:46.939040     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899401   11602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
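
kubeadm init has completed at this point and the admin kubeconfig exists at /etc/kubernetes/admin.conf on the VM. A manual sanity check (not something this test run performs) could use the same kubectl binary the log invokes a few lines below:

    sudo env KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.31.1/kubectl get nodes
    sudo env KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.31.1/kubectl get pods -n kube-system
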
	I0924 18:20:56.899428   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:56.899438   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:56.901322   11602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 18:20:56.902863   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 18:20:56.914363   11602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
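
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous lines. Its exact contents are not echoed in the log; the snippet below is only a sketch of the usual shape of such a conflist, with illustrative field values rather than the file actually written here:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
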
	I0924 18:20:56.930973   11602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:20:56.931114   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:56.931143   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-218885 minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-218885 minikube.k8s.io/primary=true
	I0924 18:20:57.076312   11602 ops.go:34] apiserver oom_adj: -16
	I0924 18:20:57.076379   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:57.576425   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.077347   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.577119   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.076927   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.577230   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.077137   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.577008   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.658298   11602 kubeadm.go:1113] duration metric: took 3.727240888s to wait for elevateKubeSystemPrivileges
	I0924 18:21:00.658328   11602 kubeadm.go:394] duration metric: took 13.888161582s to StartCluster
	I0924 18:21:00.658352   11602 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.658482   11602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:21:00.658929   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.659138   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:21:00.659158   11602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:21:00.659219   11602 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
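
The toEnable map above lists every addon the start flow will try to switch on for this profile (volcano is requested here but is rejected a little further down because the addon does not support crio). The same set can be inspected or toggled per profile from the CLI; shown only as an illustration, since this run enables them as part of minikube start:

    minikube -p addons-218885 addons list
    minikube -p addons-218885 addons enable metrics-server
    minikube -p addons-218885 addons disable volcano
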
	I0924 18:21:00.659336   11602 addons.go:69] Setting yakd=true in profile "addons-218885"
	I0924 18:21:00.659349   11602 addons.go:69] Setting inspektor-gadget=true in profile "addons-218885"
	I0924 18:21:00.659352   11602 addons.go:69] Setting default-storageclass=true in profile "addons-218885"
	I0924 18:21:00.659366   11602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-218885"
	I0924 18:21:00.659371   11602 addons.go:69] Setting volcano=true in profile "addons-218885"
	I0924 18:21:00.659357   11602 addons.go:234] Setting addon yakd=true in "addons-218885"
	I0924 18:21:00.659381   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-218885"
	I0924 18:21:00.659390   11602 addons.go:69] Setting volumesnapshots=true in profile "addons-218885"
	I0924 18:21:00.659393   11602 addons.go:69] Setting ingress=true in profile "addons-218885"
	I0924 18:21:00.659399   11602 addons.go:234] Setting addon volumesnapshots=true in "addons-218885"
	I0924 18:21:00.659414   11602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-218885"
	I0924 18:21:00.659374   11602 addons.go:234] Setting addon inspektor-gadget=true in "addons-218885"
	I0924 18:21:00.659424   11602 addons.go:234] Setting addon ingress=true in "addons-218885"
	I0924 18:21:00.659424   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659447   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659367   11602 addons.go:69] Setting storage-provisioner=true in profile "addons-218885"
	I0924 18:21:00.659550   11602 addons.go:234] Setting addon storage-provisioner=true in "addons-218885"
	I0924 18:21:00.659573   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659418   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659383   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-218885"
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659864   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659875   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659887   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659936   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659993   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659385   11602 addons.go:234] Setting addon volcano=true in "addons-218885"
	I0924 18:21:00.659395   11602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-218885"
	I0924 18:21:00.660031   11602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-218885"
	I0924 18:21:00.659449   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660131   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660177   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660206   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660213   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660215   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660246   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659454   11602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:00.659458   11602 addons.go:69] Setting gcp-auth=true in profile "addons-218885"
	I0924 18:21:00.660330   11602 mustload.go:65] Loading cluster: addons-218885
	I0924 18:21:00.660373   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660401   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660467   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660487   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659438   11602 addons.go:69] Setting registry=true in profile "addons-218885"
	I0924 18:21:00.660541   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660588   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660542   11602 addons.go:234] Setting addon registry=true in "addons-218885"
	I0924 18:21:00.660620   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659357   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660726   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659470   11602 addons.go:69] Setting ingress-dns=true in profile "addons-218885"
	I0924 18:21:00.660749   11602 addons.go:234] Setting addon ingress-dns=true in "addons-218885"
	I0924 18:21:00.660774   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660816   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660882   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660899   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661056   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661141   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661204   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661240   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661266   11602 out.go:177] * Verifying Kubernetes components...
	I0924 18:21:00.661080   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661384   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659461   11602 addons.go:69] Setting cloud-spanner=true in profile "addons-218885"
	I0924 18:21:00.661444   11602 addons.go:234] Setting addon cloud-spanner=true in "addons-218885"
	I0924 18:21:00.661469   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659384   11602 addons.go:69] Setting metrics-server=true in profile "addons-218885"
	I0924 18:21:00.661619   11602 addons.go:234] Setting addon metrics-server=true in "addons-218885"
	I0924 18:21:00.661644   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.661822   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661841   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661979   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.662002   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.672130   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0924 18:21:00.681044   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0924 18:21:00.681236   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681465   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0924 18:21:00.681785   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681838   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681788   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.682083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682102   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682225   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682240   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682295   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0924 18:21:00.682410   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682419   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682537   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682552   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682600   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682643   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682683   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682749   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.691487   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691518   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.691625   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0924 18:21:00.691743   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.691812   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0924 18:21:00.691839   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.691926   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.692170   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692210   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692229   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.692243   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.692638   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692695   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692721   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693073   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693157   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693172   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.693195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693371   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.693596   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.693635   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.693926   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.694150   11602 addons.go:234] Setting addon default-storageclass=true in "addons-218885"
	I0924 18:21:00.694198   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.694456   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694483   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.694546   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694577   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.695678   11602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-218885"
	I0924 18:21:00.695724   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.696084   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.696123   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.699951   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.700319   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.700355   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.713968   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0924 18:21:00.714463   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.715097   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.715118   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.715521   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.715582   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0924 18:21:00.716260   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.716297   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.716505   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.724819   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I0924 18:21:00.725028   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0924 18:21:00.725630   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726076   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726173   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.726195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.726596   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.727232   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.727266   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.727423   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0924 18:21:00.728015   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.728034   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.728196   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0924 18:21:00.728690   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.728703   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.728762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.729325   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.729349   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.729621   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729633   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.729639   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729653   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.730009   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730051   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730921   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.731302   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.731334   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.732823   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.733011   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.733030   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.734792   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.734795   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:00.734814   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:00.734823   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.734840   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.735052   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.735064   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	W0924 18:21:00.735144   11602 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0924 18:21:00.748351   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0924 18:21:00.750720   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0924 18:21:00.750728   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0924 18:21:00.751162   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751247   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751319   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751440   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751456   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.751567   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756724   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0924 18:21:00.756730   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.756778   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0924 18:21:00.756725   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0924 18:21:00.756847   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.756861   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756930   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757362   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.757369   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757379   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757891   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757905   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757921   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.757933   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757908   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.758358   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758456   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.759435   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0924 18:21:00.759442   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.759636   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.759920   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.760023   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.760374   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.760387   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.760408   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.760480   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.760610   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.761142   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.761179   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.761488   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.761503   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.761847   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 18:21:00.761975   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.762202   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.763064   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:21:00.763905   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764201   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.764244   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.764687   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764830   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.764843   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.765214   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.765520   11602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:21:00.765754   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.765882   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.766152   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:21:00.766854   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.766965   11602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:00.767251   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:21:00.767271   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.767695   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:21:00.767715   11602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:21:00.767749   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.768686   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:00.768698   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 18:21:00.768713   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.770065   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0924 18:21:00.770538   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.771457   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.771477   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.771872   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.772426   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.772458   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.773088   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0924 18:21:00.774506   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.774988   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775391   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.775411   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775446   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I0924 18:21:00.775557   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775742   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0924 18:21:00.775762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.776043   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776070   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776313   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:21:00.776431   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.776447   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.776497   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.776748   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.776767   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776798   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776829   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.777190   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777241   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777281   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.777317   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.777820   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.777981   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778090   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778249   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778261   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778415   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778483   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778799   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0924 18:21:00.779313   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.779385   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:21:00.779922   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.779987   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0924 18:21:00.780000   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.780014   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.780328   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.781720   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:21:00.781840   11602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:21:00.783389   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:21:00.783406   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:21:00.783408   11602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:21:00.783426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.785670   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:21:00.786392   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.786875   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.786904   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.787147   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.787290   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.787460   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.787571   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.787929   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:21:00.789553   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:21:00.789818   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0924 18:21:00.790777   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:21:00.790798   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:21:00.790817   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.791841   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0924 18:21:00.793491   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.793863   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.793884   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.794037   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.794196   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.794343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.794479   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.795306   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795325   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795413   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795716   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795878   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.795893   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.795928   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.795965   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.796083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796101   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796213   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796228   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796239   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796382   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.796422   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796444   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796634   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796692   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.797108   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.797124   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.797174   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797214   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.797254   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797672   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.797708   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.797893   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797947   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.798167   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.799160   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0924 18:21:00.799285   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799329   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799809   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800183   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.800664   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800835   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.800844   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.801181   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.801262   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.801710   11602 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:21:00.801722   11602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:21:00.801827   11602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:21:00.802746   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.802972   11602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:21:00.803140   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:00.803158   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:21:00.803175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.803332   11602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:00.803346   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:21:00.803360   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.804116   11602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 18:21:00.804171   11602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:21:00.804317   11602 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:21:00.804328   11602 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:21:00.804343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.806039   11602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:21:00.806052   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:21:00.806068   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.807823   11602 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:21:00.807997   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808507   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808913   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.808939   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809198   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:00.809214   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:21:00.809230   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.809866   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809901   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809952   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809996   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810009   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810036   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810052   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810069   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810710   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810758   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.810762   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.810798   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810928   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.810938   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810973   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811072   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811124   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811175   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811575   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.811599   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.811747   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.811961   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.812105   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.812231   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.813492   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813801   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.813819   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813949   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.814102   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.814242   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.814374   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.819089   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0924 18:21:00.819472   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.819662   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0924 18:21:00.819981   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.819993   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820026   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.820352   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.820499   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.820570   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.820585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820921   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.821036   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.822394   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822536   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822579   11602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:00.822590   11602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:21:00.822614   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.824394   11602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:21:00.825222   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825626   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.825642   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825660   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:21:00.825679   11602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:21:00.825698   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.825895   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.826045   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.826169   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.826315   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.828341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828768   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.828797   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828911   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.829107   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.829220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.829309   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.833381   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0924 18:21:00.833708   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.834195   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.834214   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.834741   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.834967   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.836909   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.838863   11602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 18:21:00.840172   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:00.840190   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 18:21:00.840204   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.843461   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.843939   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.843964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.844120   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.844264   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.844395   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.844488   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.925784   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:21:00.967714   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:21:01.124083   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:01.139520   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:01.209659   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:21:01.209681   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:21:01.211490   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:21:01.211509   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:21:01.230706   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:01.259266   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:01.265419   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:21:01.265444   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:21:01.267525   11602 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:21:01.267542   11602 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:21:01.270870   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:21:01.270886   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:21:01.294065   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:01.302436   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:21:01.302464   11602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:21:01.303436   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:21:01.303457   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:21:01.336902   11602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:21:01.336926   11602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:21:01.390129   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:01.405905   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:01.443401   11602 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:21:01.443421   11602 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:21:01.460206   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:21:01.460233   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:21:01.489629   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:21:01.489659   11602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:21:01.516924   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:21:01.516952   11602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:21:01.527602   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:21:01.527630   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:21:01.530327   11602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.530344   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:21:01.544683   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:21:01.544711   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:21:01.689986   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.690011   11602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:21:01.705932   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:21:01.705958   11602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:21:01.740697   11602 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:21:01.740721   11602 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:21:01.775169   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.804259   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:21:01.804283   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:21:01.819198   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:21:01.819230   11602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:21:01.827355   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.855195   11602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:01.855219   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:21:01.951137   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:21:01.951166   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:21:01.969440   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:21:01.969463   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:21:02.069888   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.069915   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:21:02.099859   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:02.231068   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:21:02.231095   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:21:02.305967   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:21:02.305990   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:21:02.390434   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.434755   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:21:02.434778   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:21:02.586683   11602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:21:02.586715   11602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:21:02.733250   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:21:02.733348   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:21:02.792924   11602 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:02.792950   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:21:03.055872   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.055895   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:21:03.132217   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:03.134229   11602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.208412456s)
	I0924 18:21:03.134255   11602 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
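	The sed pipeline completed above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host bridge IP (192.168.39.1 on this run). A rough way to confirm the result by hand, assuming kubectl is pointed at the same addons-218885 cluster, is:

	  kubectl -n kube-system get configmap coredns -o yaml
	  # the Corefile should now carry, ahead of the forward plugin:
	  #        hosts {
	  #           192.168.39.1 host.minikube.internal
	  #           fallthrough
	  #        }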
	I0924 18:21:03.134280   11602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.166538652s)
	I0924 18:21:03.134987   11602 node_ready.go:35] waiting up to 6m0s for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139952   11602 node_ready.go:49] node "addons-218885" has status "Ready":"True"
	I0924 18:21:03.139976   11602 node_ready.go:38] duration metric: took 4.969165ms for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139986   11602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:03.150885   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:03.433867   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.668937   11602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-218885" context rescaled to 1 replicas
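	The rescale logged here is an ordinary deployment scale operation; done by hand it would look roughly like the following (hypothetical invocation, same context and namespace as in the log):

	  kubectl --context addons-218885 -n kube-system scale deployment coredns --replicas=1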
	I0924 18:21:03.814522   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.690406906s)
	I0924 18:21:03.814578   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814590   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.814905   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.814918   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:03.814925   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:03.814936   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814944   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.815212   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.815229   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:05.193674   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.675146   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.776279   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:21:07.776319   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:07.779561   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780040   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:07.780063   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780297   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:07.780488   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:07.780661   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:07.780787   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:07.972822   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.833257544s)
	I0924 18:21:07.972874   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972887   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972834   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.742092294s)
	I0924 18:21:07.972905   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.713615593s)
	I0924 18:21:07.972935   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972950   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972937   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973033   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973066   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.567135616s)
	I0924 18:21:07.973034   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.582880208s)
	I0924 18:21:07.972999   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.678903665s)
	I0924 18:21:07.973102   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973112   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973145   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973154   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973200   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973227   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973230   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.198037335s)
	I0924 18:21:07.973239   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973249   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973251   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973257   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973257   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973262   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973267   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973263   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.14588148s)
	I0924 18:21:07.973276   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973283   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973374   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.873477591s)
	W0924 18:21:07.973425   11602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:07.973466   11602 retry.go:31] will retry after 341.273334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
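	The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in the same kubectl invocation, so the class cannot be mapped until the CRDs are registered ("ensure CRDs are installed first"), and minikube simply schedules a retry. A sketch of an ordering that avoids the retry, reusing the manifest names from this log (paths omitted) and assuming a 60s timeout is acceptable:

	  kubectl apply \
	    -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	    -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	    -f snapshot.storage.k8s.io_volumesnapshots.yaml
	  kubectl wait --for=condition=Established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	    crd/volumesnapshots.snapshot.storage.k8s.io
	  kubectl apply \
	    -f csi-hostpath-snapshotclass.yaml \
	    -f rbac-volume-snapshot-controller.yaml \
	    -f volume-snapshot-controller-deployment.yaml

	As the later log lines show, the retried apply (eventually re-run with --force) completes once the CRDs are in place.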
	I0924 18:21:07.973483   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973512   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973519   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973526   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973532   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973533   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973543   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973551   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973557   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973595   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.583073615s)
	I0924 18:21:07.973620   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973630   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973771   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973814   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973815   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.841563977s)
	I0924 18:21:07.973828   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973844   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973850   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973858   11602 addons.go:475] Verifying addon metrics-server=true in "addons-218885"
	I0924 18:21:07.973878   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973891   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973971   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973979   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974078   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974087   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974094   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974100   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974255   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974275   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974281   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974287   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974331   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974353   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974359   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974366   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974373   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974966   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974991   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974998   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975194   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975217   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975223   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975231   11602 addons.go:475] Verifying addon registry=true in "addons-218885"
	I0924 18:21:07.975723   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975745   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975768   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975774   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975931   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975939   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975946   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.975952   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976518   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.976541   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976548   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976693   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976707   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976717   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976725   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976754   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976765   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976773   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976780   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976888   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976902   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976910   11602 addons.go:475] Verifying addon ingress=true in "addons-218885"
	I0924 18:21:07.976949   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977417   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977442   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977448   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976973   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977553   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.978625   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.978641   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.979225   11602 out.go:177] * Verifying registry addon...
	I0924 18:21:07.979369   11602 out.go:177] * Verifying ingress addon...
	I0924 18:21:07.980099   11602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-218885 service yakd-dashboard -n yakd-dashboard
	
	I0924 18:21:07.981598   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:21:07.981987   11602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 18:21:07.999231   11602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:21:07.999256   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:07.999600   11602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 18:21:07.999619   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
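	The kapi.go polling above is minikube's own readiness loop; an approximate manual equivalent, using the same label selectors and namespaces shown in the log (the timeouts here are placeholders), would be:

	  kubectl -n kube-system wait --for=condition=Ready pod \
	    -l kubernetes.io/minikube-addons=registry --timeout=6m
	  kubectl -n ingress-nginx wait --for=condition=Ready pod \
	    -l app.kubernetes.io/name=ingress-nginx --timeout=6m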
	I0924 18:21:08.005488   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.005509   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.005801   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:08.005847   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.005864   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.017897   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.017922   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.018287   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.018306   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.058607   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:21:08.094929   11602 addons.go:234] Setting addon gcp-auth=true in "addons-218885"
	I0924 18:21:08.094992   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:08.095419   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.095475   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.110585   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0924 18:21:08.111040   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.111584   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.111611   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.111964   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.112535   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.112578   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.127155   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0924 18:21:08.127631   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.128121   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.128146   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.128433   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.128606   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:08.130080   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:08.130278   11602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:21:08.130305   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:08.133126   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133582   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:08.133611   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133777   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:08.133930   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:08.134104   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:08.134250   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:08.315216   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:08.488445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:08.488845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.002788   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.003393   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.077458   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.643536692s)
	I0924 18:21:09.077506   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077519   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.077783   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.077837   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.077851   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077853   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.077867   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.078166   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.078214   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.078225   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.078240   11602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:09.079280   11602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:21:09.080127   11602 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:21:09.081849   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:09.082510   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:21:09.083069   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:21:09.083086   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:21:09.113707   11602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:21:09.113739   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.175252   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:21:09.175277   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:21:09.215574   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.215599   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:21:09.270926   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.486696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.486738   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.986544   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.987121   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.087460   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.156758   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:10.264232   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.948944982s)
	I0924 18:21:10.264285   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264299   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264666   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264719   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.264726   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.264738   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264746   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264961   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264973   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.556445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.559448   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.822097   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.873812   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.602842869s)
	I0924 18:21:10.873863   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.873886   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874154   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874174   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.874183   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.874191   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874219   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874421   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874465   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874474   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.876389   11602 addons.go:475] Verifying addon gcp-auth=true in "addons-218885"
	I0924 18:21:10.878112   11602 out.go:177] * Verifying gcp-auth addon...
	I0924 18:21:10.879991   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:21:10.914619   11602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:21:10.914644   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:10.986616   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.987116   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.087458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.383545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.486763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.486957   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.640030   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.884322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.985458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.986775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.088092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.156950   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:12.383370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.485195   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.487941   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.587459   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.883672   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.986303   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.986526   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.087330   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.385285   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.485959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.486129   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.586793   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.884002   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.985294   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.987442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.087331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.384138   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.485676   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.486525   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.587163   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.673311   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:14.883885   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.985667   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.987837   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.087254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.538287   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.538499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.538661   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.587780   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.883673   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.986434   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.986755   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.087600   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.161186   11602 pod_ready.go:98] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161211   11602 pod_ready.go:82] duration metric: took 13.010302575s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	E0924 18:21:16.161224   11602 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161239   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:16.383548   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.486230   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.487442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.586690   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.986006   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.986774   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.087310   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.486612   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:17.487453   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.586919   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.883638   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.987330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.987849   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.089144   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.167517   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:18.383520   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.486806   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.486918   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:18.588925   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.883462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.986014   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.986560   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.086554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.383070   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.484874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:19.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.587560   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.883992   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.986152   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.987408   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.086874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.383440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.486268   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.486550   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:20.791631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.793936   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:20.883763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.986920   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.987056   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.088233   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.383254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.486556   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.486845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.587198   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.986396   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.986589   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.087981   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.383307   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.486130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.487114   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.587895   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.883205   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.986726   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.987810   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.087527   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.167137   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:23.382922   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.486893   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.487141   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.586653   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.887051   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.992735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.993112   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.088192   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.384102   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.485524   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.486088   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.588291   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.883718   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.986064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.986669   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.086972   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.167765   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:25.385694   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.487039   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:25.487327   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.587485   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.883440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.987089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.987473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.087677   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.383334   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.486844   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:26.487823   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.586734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.883494   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.986274   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.986679   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.087587   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.383764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.486172   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.486167   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.586436   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.667175   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:27.883579   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.986382   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.986773   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.086697   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.383293   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.493330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.505220   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.883915   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.985128   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.986961   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.086970   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.382946   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.485425   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.487089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.587540   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.670087   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:29.884302   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.985838   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.986275   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.086421   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.385253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.485483   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.486689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.588361   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.883735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.986783   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.987125   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.088911   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.385049   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.486543   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.486992   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.587160   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.883656   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.985711   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.986231   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.086502   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.167554   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:32.384448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.486308   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:32.486463   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.587554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.883253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.987205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.987734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.087771   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.384995   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.486934   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.487318   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.586663   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.884321   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.986319   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.987702   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.087618   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.168690   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:34.387765   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.485791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.486938   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:34.587761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.884048   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.985832   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.986032   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.087501   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.386323   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.486147   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.486397   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.586931   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.884466   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.987056   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.987253   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.086959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.383855   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.486473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.486749   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.586935   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.667520   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:36.884713   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.985614   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.987395   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.094813   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:37.383846   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:37.486004   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:37.486280   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.588888   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.231455   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.234409   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.239132   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.239417   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.383733   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.486322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.486594   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.587058   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.667664   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:38.883555   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.986183   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.986218   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.086393   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.383891   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.485904   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.486274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.883990   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.985035   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.986333   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.086738   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.383797   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.486109   11602 kapi.go:107] duration metric: took 32.504507933s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:21:40.486350   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.586745   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.882856   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.986205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.086472   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.167497   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:41.384061   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.486569   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.587079   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.883691   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.987379   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.086661   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.592448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.593329   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.593353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.884026   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.986740   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.087210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.384130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.486932   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.587734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.671555   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:43.884139   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.986534   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.087447   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.383601   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.486943   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.587092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.883703   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.986744   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.086822   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.384617   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.486345   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.586804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.674703   11602 pod_ready.go:93] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.674728   11602 pod_ready.go:82] duration metric: took 29.513479171s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.674737   11602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682099   11602 pod_ready.go:93] pod "etcd-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.682125   11602 pod_ready.go:82] duration metric: took 7.380934ms for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682136   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727932   11602 pod_ready.go:93] pod "kube-apiserver-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.727960   11602 pod_ready.go:82] duration metric: took 45.815667ms for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727973   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736186   11602 pod_ready.go:93] pod "kube-controller-manager-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.736205   11602 pod_ready.go:82] duration metric: took 8.225404ms for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736216   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741087   11602 pod_ready.go:93] pod "kube-proxy-jsjnj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.741103   11602 pod_ready.go:82] duration metric: took 4.881511ms for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741111   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.988310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.066604   11602 pod_ready.go:93] pod "kube-scheduler-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.066631   11602 pod_ready.go:82] duration metric: took 325.512397ms for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.066644   11602 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.087500   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.384729   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.465983   11602 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.466004   11602 pod_ready.go:82] duration metric: took 399.352493ms for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.466012   11602 pod_ready.go:39] duration metric: took 43.326012607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:46.466029   11602 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:21:46.466084   11602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:21:46.483386   11602 api_server.go:72] duration metric: took 45.824195071s to wait for apiserver process to appear ...
	I0924 18:21:46.483405   11602 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:21:46.483425   11602 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0924 18:21:46.486475   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.489100   11602 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0924 18:21:46.490451   11602 api_server.go:141] control plane version: v1.31.1
	I0924 18:21:46.490474   11602 api_server.go:131] duration metric: took 7.061904ms to wait for apiserver health ...
	I0924 18:21:46.490484   11602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:21:46.588064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.672865   11602 system_pods.go:59] 17 kube-system pods found
	I0924 18:21:46.672904   11602 system_pods.go:61] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:46.672916   11602 system_pods.go:61] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:46.672926   11602 system_pods.go:61] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:46.672936   11602 system_pods.go:61] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:46.672942   11602 system_pods.go:61] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:46.672948   11602 system_pods.go:61] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:46.672954   11602 system_pods.go:61] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:46.672962   11602 system_pods.go:61] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:46.672971   11602 system_pods.go:61] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:46.672979   11602 system_pods.go:61] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:46.672987   11602 system_pods.go:61] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:46.672995   11602 system_pods.go:61] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:46.673003   11602 system_pods.go:61] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:46.673007   11602 system_pods.go:61] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:46.673014   11602 system_pods.go:61] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673022   11602 system_pods.go:61] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673027   11602 system_pods.go:61] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:46.673035   11602 system_pods.go:74] duration metric: took 182.544371ms to wait for pod list to return data ...
	I0924 18:21:46.673044   11602 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:21:46.864990   11602 default_sa.go:45] found service account: "default"
	I0924 18:21:46.865016   11602 default_sa.go:55] duration metric: took 191.965785ms for default service account to be created ...
	I0924 18:21:46.865028   11602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:21:46.884297   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.986602   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.070157   11602 system_pods.go:86] 17 kube-system pods found
	I0924 18:21:47.070185   11602 system_pods.go:89] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:47.070195   11602 system_pods.go:89] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:47.070203   11602 system_pods.go:89] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:47.070211   11602 system_pods.go:89] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:47.070215   11602 system_pods.go:89] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:47.070219   11602 system_pods.go:89] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:47.070223   11602 system_pods.go:89] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:47.070226   11602 system_pods.go:89] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:47.070229   11602 system_pods.go:89] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:47.070232   11602 system_pods.go:89] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:47.070237   11602 system_pods.go:89] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:47.070240   11602 system_pods.go:89] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:47.070243   11602 system_pods.go:89] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:47.070246   11602 system_pods.go:89] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:47.070253   11602 system_pods.go:89] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070257   11602 system_pods.go:89] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070261   11602 system_pods.go:89] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:47.070266   11602 system_pods.go:126] duration metric: took 205.232474ms to wait for k8s-apps to be running ...
	I0924 18:21:47.070273   11602 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:21:47.070316   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:21:47.087696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.088486   11602 system_svc.go:56] duration metric: took 18.204875ms WaitForService to wait for kubelet
	I0924 18:21:47.088509   11602 kubeadm.go:582] duration metric: took 46.429320046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:21:47.088529   11602 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:21:47.266397   11602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:21:47.266422   11602 node_conditions.go:123] node cpu capacity is 2
	I0924 18:21:47.266433   11602 node_conditions.go:105] duration metric: took 177.899279ms to run NodePressure ...
	I0924 18:21:47.266444   11602 start.go:241] waiting for startup goroutines ...
	I0924 18:21:47.383807   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.486627   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.592685   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.882809   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.988953   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.088085   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.384495   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.884003   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.986521   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.089118   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.384064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.487365   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.586764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.883741   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.986565   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.086791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.383210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.486863   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.586794   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.883384   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.986147   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.087529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.383646   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.487904   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.587015   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.883461   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.986235   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.087462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:52.383965   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:52.485684   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.586927   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.043269   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.044081   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.086805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.384041   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.489996   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.588300   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.884430   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.986023   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.088358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.384017   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.486355   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.587249   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.883465   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.986368   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.088397   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.387044   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.486136   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.587101   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.883331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.986435   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.086566   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.383493   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.486431   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.587234   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.884841   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.986911   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.088106   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.384206   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.487256   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.587982   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.884019   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.994140   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.095443   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.383978   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.486983   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.587545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.883975   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.986500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.087389   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.388016   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.487717   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.591066   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.884701   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.986927   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.089353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.385499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.491326   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.586790   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.884136   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.986787   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.089833   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.388730   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.502425   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.597581   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.884562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.989808   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.089518   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:02.384237   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:02.486541   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.587146   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.079446   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.080120   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.087562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.383714   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.486549   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.587281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.884126   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.987082   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.094340   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.384081   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.486442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.586869   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.883281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.985346   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.086875   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.385212   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.487246   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.587182   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.886629   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.987975   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.087851   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.383918   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.487588   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.587475   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.883377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.986090   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.087419   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.384451   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.487315   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.588370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.884884   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.988441   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.088256   11602 kapi.go:107] duration metric: took 59.005743641s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 18:22:08.384288   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.486671   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.883496   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.986150   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.384140   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.486763   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.883529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.985845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.383692   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.485952   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.883625   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.986197   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.383715   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.486007   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.883706   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.986310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.485858   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.883805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.986764   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.385789   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.488283   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.884377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.987274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.386814   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.487614   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.884301   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.986008   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.385093   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.486500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.884358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.985775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.383761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.486006   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.883791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.986849   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.592172   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.592689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.883336   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.986491   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.383313   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.485567   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.988696   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:19.384325   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:19.485836   11602 kapi.go:107] duration metric: took 1m11.503845867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 18:22:19.883804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.442509   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.884372   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.384165   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.883778   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.383574   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.883482   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.384312   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.884604   11602 kapi.go:107] duration metric: took 1m13.004608549s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:22:23.886195   11602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-218885 cluster.
	I0924 18:22:23.887597   11602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:22:23.888920   11602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:22:23.890409   11602 out.go:177] * Enabled addons: cloud-spanner, metrics-server, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 18:22:23.891803   11602 addons.go:510] duration metric: took 1m23.232581307s for enable addons: enabled=[cloud-spanner metrics-server ingress-dns storage-provisioner inspektor-gadget nvidia-device-plugin yakd default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0924 18:22:23.891846   11602 start.go:246] waiting for cluster config update ...
	I0924 18:22:23.891861   11602 start.go:255] writing updated cluster config ...
	I0924 18:22:23.892111   11602 ssh_runner.go:195] Run: rm -f paused
	I0924 18:22:23.942645   11602 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:22:23.944149   11602 out.go:177] * Done! kubectl is now configured to use "addons-218885" cluster and "default" namespace by default
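The three gcp-auth notes above describe how minikube mounts GCP credentials into every new pod in the cluster and how to opt a pod out by giving it the gcp-auth-skip-secret label. A minimal sketch of such a pod spec, written with the Kubernetes Go types; the label key comes straight from the log, while the pod name, image, and the label value "true" are illustrative assumptions rather than anything the report confirms:

// Sketch only: builds a pod carrying the gcp-auth-skip-secret label mentioned
// in the log above, so the gcp-auth webhook should skip credential injection.
// The label value "true" is an assumption; the log only requires the key.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical name, for illustration only
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, err := yaml.Marshal(&pod) // render as YAML suitable for kubectl apply -f -
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

For pods created before the addon was enabled, the log's own suggestion applies: recreate them, or rerun the addons enable step with --refresh so the credentials get mounted.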
	
	
	==> CRI-O <==
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.378640311Z" level=info msg="Removed container 8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308: kube-system/registry-proxy-wpjp5/registry-proxy" file="server/container_remove.go:40" id=8adefbab-a945-439f-a7c1-26bd41029661 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.378829969Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=8adefbab-a945-439f-a7c1-26bd41029661 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.379398205Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308,Verbose:false,}" file="otel-collector/interceptors.go:62" id=446ab1a4-31b1-47ac-a458-06b7461c766c name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.379461320Z" level=debug msg="Response error: rpc error: code = NotFound desc = could not find container \"8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308\": container with ID starting with 8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308 not found: ID does not exist" file="otel-collector/interceptors.go:71" id=446ab1a4-31b1-47ac-a458-06b7461c766c name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.407831449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=378d9f6d-c7a7-42e0-9160-5c25f1fe401f name=/runtime.v1.RuntimeService/Version
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.407961042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=378d9f6d-c7a7-42e0-9160-5c25f1fe401f name=/runtime.v1.RuntimeService/Version
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.409007632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fab421b-851d-4e21-aba9-975951e26c1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.410028892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202698410003121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536812,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fab421b-851d-4e21-aba9-975951e26c1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.410452674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b00cbd3-ce20-43a1-9765-7103b1fa95c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.410507199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b00cbd3-ce20-43a1-9765-7103b1fa95c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.410918304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32a3e70b04b3420083cb712b54eb9c23f2cde598e34340031cd3698291f51e7,PodSandboxId:33cdef3ae8136911f29c9a4d71388cee614ff7d082933343c33ae7aae08a53a9,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727202663711810774,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-32fb6863-7fde-481e-85f8-da616d5f9350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 97eca468-5e05-40a7-88df-652f83af5ade,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9702ba4023af418f321a93e9284d1aab5ba74aac28d8e1361b9829947e2b23ba,PodSandboxId:caddbd42c77a498874e8cef5492c1e27dbca8485a206f7f44835828f8723a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727202660337585561,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4137bbf-85db-4e98-85d2-28f5aa2f3dbd,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61f9fed696b2646c1424a4d6731f20df0255f7b8b7e8bf54a71bb8971cb8838,PodSandboxId:fec6e4ccbd818f93d3e8031d4b2b6929411c857f2596fd466f37f193a85a28c8,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727202654482774524,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-32fb6863-7fde-481e-85f8-da616d5f9350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f2db2455-e433-4572-bcb5-76e480b1ffd2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756,PodSandboxId:e91cbc06f5b7e49e913a67d2645b132a353d67855790fe507fe2cd44c10b28de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727202138863609908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-52zgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fa50801-2baf-4242-9d1b-2b9f680d5498,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha
256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/in
gress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3325e1950352a5d322dab40be7ae782548982f9b8dff3baf10e56b97d
ed39d5b,PodSandboxId:badcdf0c164bfe33f7a5771c715392f394abc891483caf237fe325771cb8f74e,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727202089507605436,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-x6wlg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1823d56-3f3b-4741-8de4-5c38ebfb622e,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec,PodSandboxId:6f147cb254c2e550f47b9145b62bf7c45356144ed1326ec8a02cc707153aa76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727202075702435474,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209a83c9-7b47-44e1-8897-682ab287a114,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.p
orts: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.
container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-p
roxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b00cbd3-ce20-43a1-9765-7103b1fa95c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.445995747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa07eaf4-8bf8-4d47-86b5-0c24d1a50a61 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.446072882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa07eaf4-8bf8-4d47-86b5-0c24d1a50a61 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.447168034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63b0b3f1-a5ee-405e-be31-97c8fc246eab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.448449600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202698448421457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536812,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63b0b3f1-a5ee-405e-be31-97c8fc246eab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.449052908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aba2d25-e651-4fef-a90a-ec8c2a0f1c78 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.449153924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aba2d25-e651-4fef-a90a-ec8c2a0f1c78 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:31:38 addons-218885 crio[662]: time="2024-09-24 18:31:38.449588410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32a3e70b04b3420083cb712b54eb9c23f2cde598e34340031cd3698291f51e7,PodSandboxId:33cdef3ae8136911f29c9a4d71388cee614ff7d082933343c33ae7aae08a53a9,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727202663711810774,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-32fb6863-7fde-481e-85f8-da616d5f9350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 97eca468-5e05-40a7-88df-652f83af5ade,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9702ba4023af418f321a93e9284d1aab5ba74aac28d8e1361b9829947e2b23ba,PodSandboxId:caddbd42c77a498874e8cef5492c1e27dbca8485a206f7f44835828f8723a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727202660337585561,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4137bbf-85db-4e98-85d2-28f5aa2f3dbd,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61f9fed696b2646c1424a4d6731f20df0255f7b8b7e8bf54a71bb8971cb8838,PodSandboxId:fec6e4ccbd818f93d3e8031d4b2b6929411c857f2596fd466f37f193a85a28c8,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727202654482774524,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-32fb6863-7fde-481e-85f8-da616d5f9350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f2db2455-e433-4572-bcb5-76e480b1ffd2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756,PodSandboxId:e91cbc06f5b7e49e913a67d2645b132a353d67855790fe507fe2cd44c10b28de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727202138863609908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-52zgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fa50801-2baf-4242-9d1b-2b9f680d5498,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha
256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/in
gress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3325e1950352a5d322dab40be7ae782548982f9b8dff3baf10e56b97d
ed39d5b,PodSandboxId:badcdf0c164bfe33f7a5771c715392f394abc891483caf237fe325771cb8f74e,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727202089507605436,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-x6wlg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1823d56-3f3b-4741-8de4-5c38ebfb622e,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec,PodSandboxId:6f147cb254c2e550f47b9145b62bf7c45356144ed1326ec8a02cc707153aa76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727202075702435474,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 209a83c9-7b47-44e1-8897-682ab287a114,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.p
orts: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.
container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-p
roxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aba2d25-e651-4fef-a90a-ec8c2a0f1c78 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6c6ac506dcfc       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              10 seconds ago      Running             nginx                     0                   c37c20e05e884       nginx
	c32a3e70b04b3       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             34 seconds ago      Exited              helper-pod                0                   33cdef3ae8136       helper-pod-delete-pvc-32fb6863-7fde-481e-85f8-da616d5f9350
	9702ba4023af4       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                            38 seconds ago      Exited              busybox                   0                   caddbd42c77a4       test-local-path
	c61f9fed696b2       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            44 seconds ago      Exited              helper-pod                0                   fec6e4ccbd818       helper-pod-create-pvc-32fb6863-7fde-481e-85f8-da616d5f9350
	c303d9afee770       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   2c67c0d05137e       gcp-auth-89d5ffd79-b9jr2
	34d8cf5c7f79b       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   e91cbc06f5b7e       ingress-nginx-controller-bc57996ff-52zgf
	6642ed8d7c9aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   7655fd4af0c12       ingress-nginx-admission-patch-8hhkt
	2b171331cbfaf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   cceae892b0faf       ingress-nginx-admission-create-h4fmh
	70e31517907a7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server            0                   059713d5f2942       metrics-server-84c5f94fbc-pkzn4
	3325e1950352a       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf               10 minutes ago      Running             cloud-spanner-emulator    0                   badcdf0c164bf       cloud-spanner-emulator-5b584cc74-x6wlg
	4fa7ef575957e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   6f147cb254c2e       kube-ingress-dns-minikube
	892df4e49ab85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   d3cf49536a775       storage-provisioner
	7be47175c23bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   f51323ffd92af       coredns-7c65d6cfc9-wbgv9
	05055f26daa39       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   9379757a98736       kube-proxy-jsjnj
	01aed06020fea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   5b729a73d998d       etcd-addons-218885
	5872d2d84daec       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago      Running             kube-scheduler            0                   a466770b867e2       kube-scheduler-addons-218885
	b45900bdb8412       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago      Running             kube-apiserver            0                   f744f09f310f1       kube-apiserver-addons-218885
	176b7e7ab3b8a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago      Running             kube-controller-manager   0                   8363686ba5d19       kube-controller-manager-addons-218885
	
	
	==> coredns [7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae] <==
	Trace[220166093]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:21:35.643)
	Trace[220166093]: [30.000976871s] [30.000976871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37772 - 57440 "HINFO IN 3713161987249755073.4462496746838402409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019354444s
	[INFO] 10.244.0.7:47836 - 60886 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000305804s
	[INFO] 10.244.0.7:47836 - 53458 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110924s
	[INFO] 10.244.0.7:54106 - 25653 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000141925s
	[INFO] 10.244.0.7:54106 - 53304 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103244s
	[INFO] 10.244.0.7:50681 - 41048 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104188s
	[INFO] 10.244.0.7:50681 - 16991 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007015s
	[INFO] 10.244.0.7:52606 - 52990 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072691s
	[INFO] 10.244.0.7:52606 - 39420 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000217542s
	[INFO] 10.244.0.7:60763 - 62338 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049324s
	[INFO] 10.244.0.7:60763 - 35968 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041375s
	[INFO] 10.244.0.21:36769 - 28384 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276065s
	[INFO] 10.244.0.21:57042 - 58510 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133543s
	[INFO] 10.244.0.21:58980 - 33022 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095524s
	[INFO] 10.244.0.21:60903 - 3777 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078686s
	[INFO] 10.244.0.21:36852 - 36641 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007641s
	[INFO] 10.244.0.21:41780 - 8788 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082769s
	[INFO] 10.244.0.21:33291 - 39478 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000959689s
	[INFO] 10.244.0.21:40331 - 57937 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000913308s
	
	
	==> describe nodes <==
	Name:               addons-218885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-218885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-218885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-218885
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-218885
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:31:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:31:30 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:31:30 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:31:30 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:31:30 +0000   Tue, 24 Sep 2024 18:20:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-218885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a62f96b82b1423cb3ca4a7e749331c6
	  System UUID:                5a62f96b-82b1-423c-b3ca-4a7e749331c6
	  Boot ID:                    98ef14c8-41cc-4a65-8db8-db6c1413a40a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-5b584cc74-x6wlg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  gcp-auth                    gcp-auth-89d5ffd79-b9jr2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-52zgf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-wbgv9                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-218885                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-218885                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-218885       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jsjnj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-218885                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-pkzn4             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-218885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-218885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-218885 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-218885 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-218885 event: Registered Node addons-218885 in Controller
	
	
	==> dmesg <==
	[Sep24 18:21] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +1.205129] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.016304] kauditd_printk_skb: 142 callbacks suppressed
	[  +6.289241] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.551408] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.866552] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.601209] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.093860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:22] kauditd_printk_skb: 80 callbacks suppressed
	[  +6.788661] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.792651] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.642866] kauditd_printk_skb: 43 callbacks suppressed
	[  +7.574267] kauditd_printk_skb: 3 callbacks suppressed
	[Sep24 18:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:24] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:30] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.411074] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.529390] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.909274] kauditd_printk_skb: 20 callbacks suppressed
	[Sep24 18:31] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.163684] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.109651] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.771905] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.406957] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5] <==
	{"level":"info","ts":"2024-09-24T18:22:03.062403Z","caller":"traceutil/trace.go:171","msg":"trace[1469133074] linearizableReadLoop","detail":"{readStateIndex:1073; appliedIndex:1072; }","duration":"188.95843ms","start":"2024-09-24T18:22:02.873430Z","end":"2024-09-24T18:22:03.062389Z","steps":["trace[1469133074] 'read index received'  (duration: 184.905565ms)","trace[1469133074] 'applied index is now lower than readState.Index'  (duration: 4.052224ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:22:03.062928Z","caller":"traceutil/trace.go:171","msg":"trace[2098326213] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"283.243224ms","start":"2024-09-24T18:22:02.779454Z","end":"2024-09-24T18:22:03.062698Z","steps":["trace[2098326213] 'process raft request'  (duration: 282.828061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:03.063637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.191701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:03.063713Z","caller":"traceutil/trace.go:171","msg":"trace[936037991] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1044; }","duration":"190.244747ms","start":"2024-09-24T18:22:02.873426Z","end":"2024-09-24T18:22:03.063671Z","steps":["trace[936037991] 'agreement among raft nodes before linearized reading'  (duration: 190.034984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:22:17.577995Z","caller":"traceutil/trace.go:171","msg":"trace[1652387929] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"205.221262ms","start":"2024-09-24T18:22:17.372753Z","end":"2024-09-24T18:22:17.577975Z","steps":["trace[1652387929] 'read index received'  (duration: 205.01138ms)","trace[1652387929] 'applied index is now lower than readState.Index'  (duration: 209.231µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:22:17.578361Z","caller":"traceutil/trace.go:171","msg":"trace[1017875941] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"420.198995ms","start":"2024-09-24T18:22:17.158143Z","end":"2024-09-24T18:22:17.578342Z","steps":["trace[1017875941] 'process raft request'  (duration: 419.668298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:22:17.158125Z","time spent":"420.358099ms","remote":"127.0.0.1:37844","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1101 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-24T18:22:17.578615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.164866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578661Z","caller":"traceutil/trace.go:171","msg":"trace[1038728676] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"104.208566ms","start":"2024-09-24T18:22:17.474443Z","end":"2024-09-24T18:22:17.578651Z","steps":["trace[1038728676] 'agreement among raft nodes before linearized reading'  (duration: 104.147281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.649301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578829Z","caller":"traceutil/trace.go:171","msg":"trace[2133008651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"206.08201ms","start":"2024-09-24T18:22:17.372738Z","end":"2024-09-24T18:22:17.578820Z","steps":["trace[2133008651] 'agreement among raft nodes before linearized reading'  (duration: 205.618799ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:51.489118Z","caller":"traceutil/trace.go:171","msg":"trace[2026224945] linearizableReadLoop","detail":"{readStateIndex:2186; appliedIndex:2185; }","duration":"263.239917ms","start":"2024-09-24T18:30:51.225863Z","end":"2024-09-24T18:30:51.489103Z","steps":["trace[2026224945] 'read index received'  (duration: 263.063528ms)","trace[2026224945] 'applied index is now lower than readState.Index'  (duration: 175.847µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:30:51.489403Z","caller":"traceutil/trace.go:171","msg":"trace[588894328] transaction","detail":"{read_only:false; response_revision:2041; number_of_response:1; }","duration":"264.853791ms","start":"2024-09-24T18:30:51.224537Z","end":"2024-09-24T18:30:51.489390Z","steps":["trace[588894328] 'process raft request'  (duration: 264.431123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.718428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/registry-test.17f841a5c5f6a88e\" ","response":"range_response_count:1 size:727"}
	{"level":"info","ts":"2024-09-24T18:30:51.489618Z","caller":"traceutil/trace.go:171","msg":"trace[141298160] range","detail":"{range_begin:/registry/events/default/registry-test.17f841a5c5f6a88e; range_end:; response_count:1; response_revision:2041; }","duration":"263.752191ms","start":"2024-09-24T18:30:51.225860Z","end":"2024-09-24T18:30:51.489612Z","steps":["trace[141298160] 'agreement among raft nodes before linearized reading'  (duration: 263.661857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.1176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:30:51.489721Z","caller":"traceutil/trace.go:171","msg":"trace[905351139] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2041; }","duration":"149.1321ms","start":"2024-09-24T18:30:51.340585Z","end":"2024-09-24T18:30:51.489717Z","steps":["trace[905351139] 'agreement among raft nodes before linearized reading'  (duration: 149.10741ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:52.421183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-24T18:30:52.455466Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"33.86366ms","hash":2015250619,"current-db-size-bytes":6524928,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3493888,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-24T18:30:52.455576Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2015250619,"revision":1524,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T18:31:11.923147Z","caller":"traceutil/trace.go:171","msg":"trace[1594584679] linearizableReadLoop","detail":"{readStateIndex:2418; appliedIndex:2417; }","duration":"180.184095ms","start":"2024-09-24T18:31:11.742947Z","end":"2024-09-24T18:31:11.923131Z","steps":["trace[1594584679] 'read index received'  (duration: 179.421282ms)","trace[1594584679] 'applied index is now lower than readState.Index'  (duration: 762.207µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:31:11.923267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.306652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:31:11.923304Z","caller":"traceutil/trace.go:171","msg":"trace[669207299] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller; range_end:; response_count:0; response_revision:2266; }","duration":"180.352098ms","start":"2024-09-24T18:31:11.742942Z","end":"2024-09-24T18:31:11.923294Z","steps":["trace[669207299] 'agreement among raft nodes before linearized reading'  (duration: 180.263747ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:31:11.923415Z","caller":"traceutil/trace.go:171","msg":"trace[1440776756] transaction","detail":"{read_only:false; response_revision:2266; number_of_response:1; }","duration":"324.268386ms","start":"2024-09-24T18:31:11.599140Z","end":"2024-09-24T18:31:11.923409Z","steps":["trace[1440776756] 'process raft request'  (duration: 323.264023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:31:11.923487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:31:11.599123Z","time spent":"324.320186ms","remote":"127.0.0.1:38080","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":696,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/csinodes/addons-218885\" mod_revision:1071 > success:<request_put:<key:\"/registry/csinodes/addons-218885\" value_size:656 >> failure:<request_range:<key:\"/registry/csinodes/addons-218885\" > >"}
	
	
	==> gcp-auth [c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd] <==
	2024/09/24 18:22:22 GCP Auth Webhook started!
	2024/09/24 18:22:24 Ready to marshal response ...
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:22:24 Ready to marshal response ...
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:22:24 Ready to marshal response ...
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:30:36 Ready to marshal response ...
	2024/09/24 18:30:36 Ready to write response ...
	2024/09/24 18:30:45 Ready to marshal response ...
	2024/09/24 18:30:45 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:31:02 Ready to marshal response ...
	2024/09/24 18:31:02 Ready to write response ...
	2024/09/24 18:31:03 Ready to marshal response ...
	2024/09/24 18:31:03 Ready to write response ...
	2024/09/24 18:31:23 Ready to marshal response ...
	2024/09/24 18:31:23 Ready to write response ...
	
	
	==> kernel <==
	 18:31:38 up 11 min,  0 users,  load average: 1.98, 0.89, 0.57
	Linux addons-218885 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93] <==
	I0924 18:31:17.500554       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.524607       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.524688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.549570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.551996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.583859       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.583989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.635304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.635349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0924 18:31:18.584648       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0924 18:31:18.636057       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0924 18:31:18.663553       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0924 18:31:19.176932       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0924 18:31:23.694509       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0924 18:31:23.876326       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.166.187"}
	E0924 18:31:24.791416       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:25.798413       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:26.805249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:27.812198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:28.819268       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:29.826573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:30.833568       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:31.840586       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:32.848358       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:33.854844       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de] <==
	E0924 18:31:22.078498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:22.499159       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:22.499268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:22.613742       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:22.613845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:26.981610       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:26.981646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:27.096822       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:27.096870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:27.455338       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:27.455376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:27.464969       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:27.465017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:31:30.087204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-218885"
	I0924 18:31:30.836206       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0924 18:31:30.836376       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 18:31:31.278939       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0924 18:31:31.278983       1 shared_informer.go:320] Caches are synced for garbage collector
	W0924 18:31:36.592677       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:36.592810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:31:37.392285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.583µs"
	W0924 18:31:37.858012       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:37.858063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:38.178445       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:38.178483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:21:04.310826       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:21:04.382306       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0924 18:21:04.382374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:21:05.227657       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:21:05.227715       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:21:05.227740       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:21:05.641037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:21:05.641385       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:21:05.641397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:21:05.651378       1 config.go:199] "Starting service config controller"
	I0924 18:21:05.651407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:21:05.651432       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:21:05.651436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:21:05.651985       1 config.go:328] "Starting node config controller"
	I0924 18:21:05.651993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:21:05.751639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:21:05.751676       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:21:05.760267       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef] <==
	W0924 18:20:53.710329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:20:53.710356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:53.710442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:53.710546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.712017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:53.712081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.523135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:54.523250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.558496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:54.558598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.602148       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:20:54.602194       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:20:54.615690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:20:54.616117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.623597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 18:20:54.623684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.642634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:20:54.643012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.652972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:54.653082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.764823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 18:20:54.764896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0924 18:20:57.293683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:31:34 addons-218885 kubelet[1212]: I0924 18:31:34.297602    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76872e748e631f49e4c120804280945e1dc71eace209c2f2c65f1596b4b03c7d"} err="failed to get container status \"76872e748e631f49e4c120804280945e1dc71eace209c2f2c65f1596b4b03c7d\": rpc error: code = NotFound desc = could not find container \"76872e748e631f49e4c120804280945e1dc71eace209c2f2c65f1596b4b03c7d\": container with ID starting with 76872e748e631f49e4c120804280945e1dc71eace209c2f2c65f1596b4b03c7d not found: ID does not exist"
	Sep 24 18:31:36 addons-218885 kubelet[1212]: I0924 18:31:36.221036    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d49d6a-ad04-42f9-83fb-0c617b06d97a" path="/var/lib/kubelet/pods/46d49d6a-ad04-42f9-83fb-0c617b06d97a/volumes"
	Sep 24 18:31:36 addons-218885 kubelet[1212]: E0924 18:31:36.856512    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202696856069792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536812,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:31:36 addons-218885 kubelet[1212]: E0924 18:31:36.856535    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202696856069792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:536812,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.059702    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/62e281b3-827c-40ee-8ad0-ade64b0eed02-gcp-creds\") pod \"62e281b3-827c-40ee-8ad0-ade64b0eed02\" (UID: \"62e281b3-827c-40ee-8ad0-ade64b0eed02\") "
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.059758    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdk2x\" (UniqueName: \"kubernetes.io/projected/62e281b3-827c-40ee-8ad0-ade64b0eed02-kube-api-access-hdk2x\") pod \"62e281b3-827c-40ee-8ad0-ade64b0eed02\" (UID: \"62e281b3-827c-40ee-8ad0-ade64b0eed02\") "
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.060392    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62e281b3-827c-40ee-8ad0-ade64b0eed02-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "62e281b3-827c-40ee-8ad0-ade64b0eed02" (UID: "62e281b3-827c-40ee-8ad0-ade64b0eed02"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.063430    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e281b3-827c-40ee-8ad0-ade64b0eed02-kube-api-access-hdk2x" (OuterVolumeSpecName: "kube-api-access-hdk2x") pod "62e281b3-827c-40ee-8ad0-ade64b0eed02" (UID: "62e281b3-827c-40ee-8ad0-ade64b0eed02"). InnerVolumeSpecName "kube-api-access-hdk2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.160796    1212 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/62e281b3-827c-40ee-8ad0-ade64b0eed02-gcp-creds\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.160826    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hdk2x\" (UniqueName: \"kubernetes.io/projected/62e281b3-827c-40ee-8ad0-ade64b0eed02-kube-api-access-hdk2x\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.765046    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pshn7\" (UniqueName: \"kubernetes.io/projected/bb39eff0-510f-4e28-b3b7-a246e7ca880c-kube-api-access-pshn7\") pod \"bb39eff0-510f-4e28-b3b7-a246e7ca880c\" (UID: \"bb39eff0-510f-4e28-b3b7-a246e7ca880c\") "
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.769632    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb39eff0-510f-4e28-b3b7-a246e7ca880c-kube-api-access-pshn7" (OuterVolumeSpecName: "kube-api-access-pshn7") pod "bb39eff0-510f-4e28-b3b7-a246e7ca880c" (UID: "bb39eff0-510f-4e28-b3b7-a246e7ca880c"). InnerVolumeSpecName "kube-api-access-pshn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.866684    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrpw\" (UniqueName: \"kubernetes.io/projected/e715cd68-83d0-4850-abc2-b9a3f139e6f8-kube-api-access-szrpw\") pod \"e715cd68-83d0-4850-abc2-b9a3f139e6f8\" (UID: \"e715cd68-83d0-4850-abc2-b9a3f139e6f8\") "
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.866808    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pshn7\" (UniqueName: \"kubernetes.io/projected/bb39eff0-510f-4e28-b3b7-a246e7ca880c-kube-api-access-pshn7\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.869032    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e715cd68-83d0-4850-abc2-b9a3f139e6f8-kube-api-access-szrpw" (OuterVolumeSpecName: "kube-api-access-szrpw") pod "e715cd68-83d0-4850-abc2-b9a3f139e6f8" (UID: "e715cd68-83d0-4850-abc2-b9a3f139e6f8"). InnerVolumeSpecName "kube-api-access-szrpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:31:37 addons-218885 kubelet[1212]: I0924 18:31:37.967695    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-szrpw\" (UniqueName: \"kubernetes.io/projected/e715cd68-83d0-4850-abc2-b9a3f139e6f8-kube-api-access-szrpw\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.221592    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e281b3-827c-40ee-8ad0-ade64b0eed02" path="/var/lib/kubelet/pods/62e281b3-827c-40ee-8ad0-ade64b0eed02/volumes"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.312264    1212 scope.go:117] "RemoveContainer" containerID="abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.349909    1212 scope.go:117] "RemoveContainer" containerID="abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: E0924 18:31:38.350463    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d\": container with ID starting with abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d not found: ID does not exist" containerID="abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.350493    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d"} err="failed to get container status \"abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d\": rpc error: code = NotFound desc = could not find container \"abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d\": container with ID starting with abee0f70c7e7a4b2e69e686d35c1ba7b67cdd6de6a260c09a10351d6d577917d not found: ID does not exist"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.350514    1212 scope.go:117] "RemoveContainer" containerID="8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.379170    1212 scope.go:117] "RemoveContainer" containerID="8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: E0924 18:31:38.379701    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308\": container with ID starting with 8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308 not found: ID does not exist" containerID="8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308"
	Sep 24 18:31:38 addons-218885 kubelet[1212]: I0924 18:31:38.379733    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308"} err="failed to get container status \"8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308\": rpc error: code = NotFound desc = could not find container \"8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308\": container with ID starting with 8a204fd3c4d8e53692582534c6f702196abf1467b8a97036f2350d701d54f308 not found: ID does not exist"
	
	
	==> storage-provisioner [892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34] <==
	I0924 18:21:07.075958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:21:07.205644       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:21:07.205811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:21:07.484816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:21:07.489301       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	I0924 18:21:07.503782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa2296f7-92f6-4a3d-97ef-5ea843d9a5be", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f became leader
	I0924 18:21:07.594117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-218885 -n addons-218885
helpers_test.go:261: (dbg) Run:  kubectl --context addons-218885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-h4fmh ingress-nginx-admission-patch-8hhkt
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-218885 describe pod busybox ingress-nginx-admission-create-h4fmh ingress-nginx-admission-patch-8hhkt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-218885 describe pod busybox ingress-nginx-admission-create-h4fmh ingress-nginx-admission-patch-8hhkt: exit status 1 (65.603214ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218885/192.168.39.215
	Start Time:       Tue, 24 Sep 2024 18:22:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5n6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z5n6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-218885
	  Normal   Pulling    7m42s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h4fmh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8hhkt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-218885 describe pod busybox ingress-nginx-admission-create-h4fmh ingress-nginx-admission-patch-8hhkt: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.93s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (152.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-218885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-218885 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-218885 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dcc9442c-a1e0-46a5-9db8-d027ceac1950] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dcc9442c-a1e0-46a5-9db8-d027ceac1950] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003995774s
I0924 18:31:33.916966   10949 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-218885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.793006411s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-218885 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.215
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 addons disable ingress --alsologtostderr -v=1: (7.648609251s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-218885 -n addons-218885
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 logs -n 25: (1.155981926s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-366438                                                                     | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | binary-mirror-303583                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40655                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-303583                                                                     | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-218885 --wait=true                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh cat                                                                       | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | /opt/local-path-provisioner/pvc-32fb6863-7fde-481e-85f8-da616d5f9350_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | -p addons-218885                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh curl -s                                                                   | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-218885 ip                                                                            | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | -p addons-218885                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-218885 ip                                                                            | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
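
	The start entry above spans many table rows; reassembled into a single command line it reads roughly as follows (line breaks and backslashes added for wrapping only):

	    minikube start -p addons-218885 --wait=true --memory=4000 --alsologtostderr \
	      --addons=registry --addons=metrics-server --addons=volumesnapshots \
	      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	      --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns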
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:12.325736   11602 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:12.325986   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.325997   11602 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:12.326003   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.326193   11602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:20:12.326790   11602 out.go:352] Setting JSON to false
	I0924 18:20:12.327640   11602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":163,"bootTime":1727201849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:20:12.327726   11602 start.go:139] virtualization: kvm guest
	I0924 18:20:12.329631   11602 out.go:177] * [addons-218885] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:20:12.331012   11602 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:20:12.331079   11602 notify.go:220] Checking for updates...
	I0924 18:20:12.333440   11602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:12.334628   11602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:20:12.335823   11602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.337065   11602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:20:12.338153   11602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:20:12.339404   11602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:12.370285   11602 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:20:12.371583   11602 start.go:297] selected driver: kvm2
	I0924 18:20:12.371597   11602 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:20:12.371608   11602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:20:12.372940   11602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.373043   11602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:20:12.393549   11602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:20:12.393593   11602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:12.393793   11602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:12.393823   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:12.393846   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:12.393854   11602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:12.393894   11602 start.go:340] cluster config:
	{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:12.393973   11602 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.395768   11602 out.go:177] * Starting "addons-218885" primary control-plane node in "addons-218885" cluster
	I0924 18:20:12.396963   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:12.396994   11602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:20:12.397002   11602 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:12.397076   11602 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:20:12.397086   11602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:20:12.397361   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:12.397381   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json: {Name:mk8ae020c4167ae6b07f3b581ad7b941f00493e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:12.397501   11602 start.go:360] acquireMachinesLock for addons-218885: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:20:12.397544   11602 start.go:364] duration metric: took 30.473µs to acquireMachinesLock for "addons-218885"
	I0924 18:20:12.397560   11602 start.go:93] Provisioning new machine with config: &{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:20:12.397621   11602 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:20:12.399224   11602 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0924 18:20:12.399337   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:20:12.399361   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:20:12.413485   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0924 18:20:12.413984   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:20:12.414522   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:20:12.414543   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:20:12.414994   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:20:12.415195   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:12.415361   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:12.415550   11602 start.go:159] libmachine.API.Create for "addons-218885" (driver="kvm2")
	I0924 18:20:12.415574   11602 client.go:168] LocalClient.Create starting
	I0924 18:20:12.415623   11602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:20:12.521230   11602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:20:12.771341   11602 main.go:141] libmachine: Running pre-create checks...
	I0924 18:20:12.771362   11602 main.go:141] libmachine: (addons-218885) Calling .PreCreateCheck
	I0924 18:20:12.771809   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:12.772210   11602 main.go:141] libmachine: Creating machine...
	I0924 18:20:12.772225   11602 main.go:141] libmachine: (addons-218885) Calling .Create
	I0924 18:20:12.772358   11602 main.go:141] libmachine: (addons-218885) Creating KVM machine...
	I0924 18:20:12.773495   11602 main.go:141] libmachine: (addons-218885) DBG | found existing default KVM network
	I0924 18:20:12.774264   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.774133   11624 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0924 18:20:12.774303   11602 main.go:141] libmachine: (addons-218885) DBG | created network xml: 
	I0924 18:20:12.774319   11602 main.go:141] libmachine: (addons-218885) DBG | <network>
	I0924 18:20:12.774325   11602 main.go:141] libmachine: (addons-218885) DBG |   <name>mk-addons-218885</name>
	I0924 18:20:12.774334   11602 main.go:141] libmachine: (addons-218885) DBG |   <dns enable='no'/>
	I0924 18:20:12.774360   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774381   11602 main.go:141] libmachine: (addons-218885) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:20:12.774452   11602 main.go:141] libmachine: (addons-218885) DBG |     <dhcp>
	I0924 18:20:12.774493   11602 main.go:141] libmachine: (addons-218885) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:20:12.774513   11602 main.go:141] libmachine: (addons-218885) DBG |     </dhcp>
	I0924 18:20:12.774524   11602 main.go:141] libmachine: (addons-218885) DBG |   </ip>
	I0924 18:20:12.774536   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774546   11602 main.go:141] libmachine: (addons-218885) DBG | </network>
	I0924 18:20:12.774569   11602 main.go:141] libmachine: (addons-218885) DBG | 
	I0924 18:20:12.779356   11602 main.go:141] libmachine: (addons-218885) DBG | trying to create private KVM network mk-addons-218885 192.168.39.0/24...
	I0924 18:20:12.840345   11602 main.go:141] libmachine: (addons-218885) DBG | private KVM network mk-addons-218885 192.168.39.0/24 created
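
	The network XML logged above is ordinary libvirt network XML: a private 192.168.39.0/24 network with a DHCP range and DNS disabled. Outside the driver, an equivalent network could be created by hand with virsh; a minimal sketch, assuming the XML is saved to mk-addons-218885.xml:

	    # define, start and autostart the private network from the XML above (sketch)
	    virsh net-define mk-addons-218885.xml
	    virsh net-start mk-addons-218885
	    virsh net-autostart mk-addons-218885
	    # inspect the result
	    virsh net-info mk-addons-218885
	    virsh net-dumpxml mk-addons-218885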
	I0924 18:20:12.840381   11602 main.go:141] libmachine: (addons-218885) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:12.840394   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.840325   11624 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.840402   11602 main.go:141] libmachine: (addons-218885) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:20:12.840503   11602 main.go:141] libmachine: (addons-218885) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:20:13.080883   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.080784   11624 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa...
	I0924 18:20:13.196783   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196657   11624 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk...
	I0924 18:20:13.196813   11602 main.go:141] libmachine: (addons-218885) DBG | Writing magic tar header
	I0924 18:20:13.196826   11602 main.go:141] libmachine: (addons-218885) DBG | Writing SSH key tar header
	I0924 18:20:13.196836   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196759   11624 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:13.196852   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885
	I0924 18:20:13.196869   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:20:13.196911   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 (perms=drwx------)
	I0924 18:20:13.196926   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:13.196942   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:20:13.196954   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:20:13.196965   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:20:13.196984   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:20:13.196995   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home
	I0924 18:20:13.197007   11602 main.go:141] libmachine: (addons-218885) DBG | Skipping /home - not owner
	I0924 18:20:13.197025   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:20:13.197038   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:20:13.197053   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:20:13.197070   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
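
	The driver writes the machine's SSH key and raw disk image directly in Go (hence the "Writing magic tar header" lines); a rough manual equivalent of this step, shown only for illustration, would be:

	    # illustrative sketch only: the kvm2 driver creates these files itself, it does not shell out
	    ssh-keygen -t rsa -N '' -f ./machines/addons-218885/id_rsa
	    qemu-img create -f raw ./machines/addons-218885/addons-218885.rawdisk 20000M
	    chmod 700 ./machines/addons-218885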
	I0924 18:20:13.197083   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:13.198004   11602 main.go:141] libmachine: (addons-218885) define libvirt domain using xml: 
	I0924 18:20:13.198029   11602 main.go:141] libmachine: (addons-218885) <domain type='kvm'>
	I0924 18:20:13.198041   11602 main.go:141] libmachine: (addons-218885)   <name>addons-218885</name>
	I0924 18:20:13.198049   11602 main.go:141] libmachine: (addons-218885)   <memory unit='MiB'>4000</memory>
	I0924 18:20:13.198059   11602 main.go:141] libmachine: (addons-218885)   <vcpu>2</vcpu>
	I0924 18:20:13.198066   11602 main.go:141] libmachine: (addons-218885)   <features>
	I0924 18:20:13.198071   11602 main.go:141] libmachine: (addons-218885)     <acpi/>
	I0924 18:20:13.198077   11602 main.go:141] libmachine: (addons-218885)     <apic/>
	I0924 18:20:13.198085   11602 main.go:141] libmachine: (addons-218885)     <pae/>
	I0924 18:20:13.198092   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198097   11602 main.go:141] libmachine: (addons-218885)   </features>
	I0924 18:20:13.198104   11602 main.go:141] libmachine: (addons-218885)   <cpu mode='host-passthrough'>
	I0924 18:20:13.198109   11602 main.go:141] libmachine: (addons-218885)   
	I0924 18:20:13.198116   11602 main.go:141] libmachine: (addons-218885)   </cpu>
	I0924 18:20:13.198121   11602 main.go:141] libmachine: (addons-218885)   <os>
	I0924 18:20:13.198129   11602 main.go:141] libmachine: (addons-218885)     <type>hvm</type>
	I0924 18:20:13.198135   11602 main.go:141] libmachine: (addons-218885)     <boot dev='cdrom'/>
	I0924 18:20:13.198140   11602 main.go:141] libmachine: (addons-218885)     <boot dev='hd'/>
	I0924 18:20:13.198167   11602 main.go:141] libmachine: (addons-218885)     <bootmenu enable='no'/>
	I0924 18:20:13.198188   11602 main.go:141] libmachine: (addons-218885)   </os>
	I0924 18:20:13.198200   11602 main.go:141] libmachine: (addons-218885)   <devices>
	I0924 18:20:13.198211   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='cdrom'>
	I0924 18:20:13.198226   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/boot2docker.iso'/>
	I0924 18:20:13.198237   11602 main.go:141] libmachine: (addons-218885)       <target dev='hdc' bus='scsi'/>
	I0924 18:20:13.198247   11602 main.go:141] libmachine: (addons-218885)       <readonly/>
	I0924 18:20:13.198257   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198267   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='disk'>
	I0924 18:20:13.198282   11602 main.go:141] libmachine: (addons-218885)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:20:13.198296   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk'/>
	I0924 18:20:13.198308   11602 main.go:141] libmachine: (addons-218885)       <target dev='hda' bus='virtio'/>
	I0924 18:20:13.198316   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198328   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198339   11602 main.go:141] libmachine: (addons-218885)       <source network='mk-addons-218885'/>
	I0924 18:20:13.198352   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198367   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198380   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198390   11602 main.go:141] libmachine: (addons-218885)       <source network='default'/>
	I0924 18:20:13.198398   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198407   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198418   11602 main.go:141] libmachine: (addons-218885)     <serial type='pty'>
	I0924 18:20:13.198427   11602 main.go:141] libmachine: (addons-218885)       <target port='0'/>
	I0924 18:20:13.198462   11602 main.go:141] libmachine: (addons-218885)     </serial>
	I0924 18:20:13.198485   11602 main.go:141] libmachine: (addons-218885)     <console type='pty'>
	I0924 18:20:13.198491   11602 main.go:141] libmachine: (addons-218885)       <target type='serial' port='0'/>
	I0924 18:20:13.198499   11602 main.go:141] libmachine: (addons-218885)     </console>
	I0924 18:20:13.198504   11602 main.go:141] libmachine: (addons-218885)     <rng model='virtio'>
	I0924 18:20:13.198513   11602 main.go:141] libmachine: (addons-218885)       <backend model='random'>/dev/random</backend>
	I0924 18:20:13.198518   11602 main.go:141] libmachine: (addons-218885)     </rng>
	I0924 18:20:13.198522   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198527   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198533   11602 main.go:141] libmachine: (addons-218885)   </devices>
	I0924 18:20:13.198538   11602 main.go:141] libmachine: (addons-218885) </domain>
	I0924 18:20:13.198542   11602 main.go:141] libmachine: (addons-218885) 
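
	The domain XML above is plain libvirt domain XML: 4000 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (one on mk-addons-218885, one on the default network). The driver registers and boots it through the libvirt API; done by hand with virsh, assuming the XML is saved to addons-218885.xml, the same steps would be:

	    virsh define addons-218885.xml    # register the domain from the XML above
	    virsh start addons-218885         # boot it (CD-ROM first, then the raw disk)
	    virsh domiflist addons-218885     # show the NICs on mk-addons-218885 and default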
	I0924 18:20:13.204102   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:cf:a6:03 in network default
	I0924 18:20:13.204625   11602 main.go:141] libmachine: (addons-218885) Ensuring networks are active...
	I0924 18:20:13.204646   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:13.205345   11602 main.go:141] libmachine: (addons-218885) Ensuring network default is active
	I0924 18:20:13.205671   11602 main.go:141] libmachine: (addons-218885) Ensuring network mk-addons-218885 is active
	I0924 18:20:13.207039   11602 main.go:141] libmachine: (addons-218885) Getting domain xml...
	I0924 18:20:13.207785   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:14.575302   11602 main.go:141] libmachine: (addons-218885) Waiting to get IP...
	I0924 18:20:14.575964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.576313   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.576343   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.576303   11624 retry.go:31] will retry after 274.373447ms: waiting for machine to come up
	I0924 18:20:14.852639   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.852971   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.852999   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.852930   11624 retry.go:31] will retry after 320.247846ms: waiting for machine to come up
	I0924 18:20:15.174341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.174769   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.174795   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.174721   11624 retry.go:31] will retry after 480.520038ms: waiting for machine to come up
	I0924 18:20:15.656403   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.656812   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.656838   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.656779   11624 retry.go:31] will retry after 445.239578ms: waiting for machine to come up
	I0924 18:20:16.103322   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.103649   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.103675   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.103614   11624 retry.go:31] will retry after 512.464509ms: waiting for machine to come up
	I0924 18:20:16.617221   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.617724   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.617760   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.617646   11624 retry.go:31] will retry after 857.414245ms: waiting for machine to come up
	I0924 18:20:17.477266   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:17.477652   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:17.477673   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:17.477626   11624 retry.go:31] will retry after 806.166754ms: waiting for machine to come up
	I0924 18:20:18.285640   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:18.286077   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:18.286100   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:18.286052   11624 retry.go:31] will retry after 1.16238491s: waiting for machine to come up
	I0924 18:20:19.450511   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:19.450884   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:19.450904   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:19.450866   11624 retry.go:31] will retry after 1.335718023s: waiting for machine to come up
	I0924 18:20:20.788441   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:20.788913   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:20.788943   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:20.788872   11624 retry.go:31] will retry after 1.799499594s: waiting for machine to come up
	I0924 18:20:22.589666   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:22.590013   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:22.590062   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:22.589996   11624 retry.go:31] will retry after 1.859729205s: waiting for machine to come up
	I0924 18:20:24.452908   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:24.453276   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:24.453302   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:24.453236   11624 retry.go:31] will retry after 2.767497543s: waiting for machine to come up
	I0924 18:20:27.223890   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:27.224340   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:27.224362   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:27.224297   11624 retry.go:31] will retry after 4.46492502s: waiting for machine to come up
	I0924 18:20:31.694510   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:31.694968   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:31.694990   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:31.694927   11624 retry.go:31] will retry after 4.457689137s: waiting for machine to come up
	I0924 18:20:36.156477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157022   11602 main.go:141] libmachine: (addons-218885) Found IP for machine: 192.168.39.215
	I0924 18:20:36.157042   11602 main.go:141] libmachine: (addons-218885) Reserving static IP address...
	I0924 18:20:36.157083   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has current primary IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157396   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "addons-218885", mac: "52:54:00:4f:2a:e2", ip: "192.168.39.215"} in network mk-addons-218885
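
	The retry lines above poll libvirt with increasing delays until the guest picks up a DHCP lease on mk-addons-218885. The same lease table can be watched by hand, assuming virsh access to qemu:///system:

	    # wait for the guest's MAC to appear in the network's lease table (sketch)
	    until virsh net-dhcp-leases mk-addons-218885 | grep -q '52:54:00:4f:2a:e2'; do
	      sleep 2
	    done
	    virsh net-dhcp-leases mk-addons-218885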
	I0924 18:20:36.229161   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:36.229194   11602 main.go:141] libmachine: (addons-218885) Reserved static IP address: 192.168.39.215
	I0924 18:20:36.229207   11602 main.go:141] libmachine: (addons-218885) Waiting for SSH to be available...
	I0924 18:20:36.231373   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.231611   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885
	I0924 18:20:36.231644   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find defined IP address of network mk-addons-218885 interface with MAC address 52:54:00:4f:2a:e2
	I0924 18:20:36.231777   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:36.231800   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:36.231882   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:36.231906   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:36.231920   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:36.243616   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:20:36.243646   11602 main.go:141] libmachine: (addons-218885) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:20:36.243654   11602 main.go:141] libmachine: (addons-218885) DBG | command : exit 0
	I0924 18:20:36.243658   11602 main.go:141] libmachine: (addons-218885) DBG | err     : exit status 255
	I0924 18:20:36.243667   11602 main.go:141] libmachine: (addons-218885) DBG | output  : 
	I0924 18:20:39.245429   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:39.247941   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248310   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.248361   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248472   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:39.248497   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:39.248544   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:39.248581   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:39.248599   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:39.370720   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: <nil>: 
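
	The WaitForSSH probe is an external ssh invocation that simply runs "exit 0" on the guest; written out as a single command from the argument list logged above, it is equivalent to:

	    ssh -F /dev/null \
	      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa \
	      -p 22 docker@192.168.39.215 'exit 0'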
	I0924 18:20:39.371024   11602 main.go:141] libmachine: (addons-218885) KVM machine creation complete!
	I0924 18:20:39.371383   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:39.371926   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372115   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372292   11602 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:20:39.372308   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:20:39.373716   11602 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:20:39.373728   11602 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:20:39.373737   11602 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:20:39.373742   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.375983   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376314   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.376342   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.376746   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.376896   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.377041   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.377176   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.377355   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.377366   11602 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:20:39.474162   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.474185   11602 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:20:39.474192   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.476622   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477004   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.477030   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.477426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477578   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477699   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.477853   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.478018   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.478028   11602 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:20:39.575513   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:20:39.575630   11602 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:20:39.575647   11602 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:20:39.575659   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.575913   11602 buildroot.go:166] provisioning hostname "addons-218885"
	I0924 18:20:39.575936   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.576144   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.578676   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579102   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.579128   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579285   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.579467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579584   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579717   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.579893   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.580094   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.580111   11602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-218885 && echo "addons-218885" | sudo tee /etc/hostname
	I0924 18:20:39.692677   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-218885
	
	I0924 18:20:39.692711   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.695685   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696027   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.696057   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.696411   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696598   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696757   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.696917   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.697115   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.697138   11602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-218885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-218885/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-218885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:20:39.803035   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.803068   11602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:20:39.803143   11602 buildroot.go:174] setting up certificates
	I0924 18:20:39.803160   11602 provision.go:84] configureAuth start
	I0924 18:20:39.803180   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.803472   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:39.806086   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806371   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.806397   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806540   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.808868   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809212   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.809237   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809404   11602 provision.go:143] copyHostCerts
	I0924 18:20:39.809469   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:20:39.809588   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:20:39.809648   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:20:39.809697   11602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.addons-218885 san=[127.0.0.1 192.168.39.215 addons-218885 localhost minikube]
	I0924 18:20:40.082244   11602 provision.go:177] copyRemoteCerts
	I0924 18:20:40.082308   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:20:40.082332   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.085171   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085563   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.085591   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085797   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.085983   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.086103   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.086224   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.165135   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:20:40.192252   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:20:40.219501   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:20:40.246264   11602 provision.go:87] duration metric: took 443.085344ms to configureAuth
	I0924 18:20:40.246293   11602 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:20:40.246484   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:20:40.246570   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.249244   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249629   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.249653   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249818   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.250018   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250308   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.250488   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.250644   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.250658   11602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:20:40.468815   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:20:40.468854   11602 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:20:40.468866   11602 main.go:141] libmachine: (addons-218885) Calling .GetURL
	I0924 18:20:40.470093   11602 main.go:141] libmachine: (addons-218885) DBG | Using libvirt version 6000000
	I0924 18:20:40.472092   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472382   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.472406   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472571   11602 main.go:141] libmachine: Docker is up and running!
	I0924 18:20:40.472589   11602 main.go:141] libmachine: Reticulating splines...
	I0924 18:20:40.472597   11602 client.go:171] duration metric: took 28.057014034s to LocalClient.Create
	I0924 18:20:40.472624   11602 start.go:167] duration metric: took 28.057073554s to libmachine.API.Create "addons-218885"
	I0924 18:20:40.472634   11602 start.go:293] postStartSetup for "addons-218885" (driver="kvm2")
	I0924 18:20:40.472648   11602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:20:40.472666   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.472877   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:20:40.472906   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.475196   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475548   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.475575   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475695   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.475855   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.476016   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.476154   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.552548   11602 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:20:40.556457   11602 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:20:40.556481   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:20:40.556558   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:20:40.556592   11602 start.go:296] duration metric: took 83.950837ms for postStartSetup
	I0924 18:20:40.556636   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:40.557160   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.559791   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560070   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.560094   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560299   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:40.560458   11602 start.go:128] duration metric: took 28.162828516s to createHost
	I0924 18:20:40.560481   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.562477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.562977   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.563007   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.563174   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.563321   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563475   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563572   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.563723   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.563885   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.563895   11602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:20:40.659437   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727202040.641796120
	
	I0924 18:20:40.659459   11602 fix.go:216] guest clock: 1727202040.641796120
	I0924 18:20:40.659466   11602 fix.go:229] Guest: 2024-09-24 18:20:40.64179612 +0000 UTC Remote: 2024-09-24 18:20:40.560467466 +0000 UTC m=+28.266972018 (delta=81.328654ms)
	I0924 18:20:40.659526   11602 fix.go:200] guest clock delta is within tolerance: 81.328654ms
	I0924 18:20:40.659536   11602 start.go:83] releasing machines lock for "addons-218885", held for 28.261982282s
	I0924 18:20:40.659570   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.659802   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.662293   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662595   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.662623   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662765   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663205   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663369   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663431   11602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:20:40.663474   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.663578   11602 ssh_runner.go:195] Run: cat /version.json
	I0924 18:20:40.663600   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.666017   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666043   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666366   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666401   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666427   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666442   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666568   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666579   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666726   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666735   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666891   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.666925   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.667053   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.667063   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.762590   11602 ssh_runner.go:195] Run: systemctl --version
	I0924 18:20:40.768558   11602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:20:40.923618   11602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:20:40.929415   11602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:20:40.929483   11602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:20:40.944982   11602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:20:40.945009   11602 start.go:495] detecting cgroup driver to use...
	I0924 18:20:40.945091   11602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:20:40.960695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:20:40.974660   11602 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:20:40.974712   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:20:40.988081   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:20:41.001845   11602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:20:41.116471   11602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:20:41.278206   11602 docker.go:233] disabling docker service ...
	I0924 18:20:41.278282   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:20:41.292340   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:20:41.304936   11602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:20:41.427259   11602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:20:41.556695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:20:41.569928   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:41.587343   11602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:20:41.587395   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.597357   11602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:20:41.597420   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.607453   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.617617   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.627570   11602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:20:41.637701   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.647609   11602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.663924   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.674020   11602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:20:41.683135   11602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:20:41.683188   11602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:20:41.696102   11602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:20:41.705462   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:41.823495   11602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:20:41.913369   11602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:20:41.913456   11602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:20:41.918292   11602 start.go:563] Will wait 60s for crictl version
	I0924 18:20:41.918361   11602 ssh_runner.go:195] Run: which crictl
	I0924 18:20:41.921901   11602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:20:41.958038   11602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:20:41.958153   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:41.985269   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:42.014805   11602 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:20:42.016093   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:42.018614   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019098   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:42.019139   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019258   11602 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:20:42.022974   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:20:42.034408   11602 kubeadm.go:883] updating cluster {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:20:42.034513   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:42.034569   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:42.064250   11602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:20:42.064317   11602 ssh_runner.go:195] Run: which lz4
	I0924 18:20:42.068235   11602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:20:42.072127   11602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:20:42.072165   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:20:43.181256   11602 crio.go:462] duration metric: took 1.11306138s to copy over tarball
	I0924 18:20:43.181321   11602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:20:45.254978   11602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.073631711s)
	I0924 18:20:45.255003   11602 crio.go:469] duration metric: took 2.07372259s to extract the tarball
	I0924 18:20:45.255011   11602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:20:45.291605   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:45.334151   11602 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:20:45.334171   11602 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:20:45.334179   11602 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0924 18:20:45.334266   11602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-218885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:20:45.334326   11602 ssh_runner.go:195] Run: crio config
	I0924 18:20:45.379706   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:45.379729   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:45.379738   11602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:20:45.379759   11602 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-218885 NodeName:addons-218885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:20:45.379870   11602 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-218885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:20:45.379931   11602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:45.389532   11602 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:20:45.389607   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:20:45.398734   11602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 18:20:45.414812   11602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:20:45.430737   11602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0924 18:20:45.447185   11602 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0924 18:20:45.451002   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:20:45.463061   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:45.578185   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:20:45.595455   11602 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885 for IP: 192.168.39.215
	I0924 18:20:45.595478   11602 certs.go:194] generating shared ca certs ...
	I0924 18:20:45.595493   11602 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.595628   11602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:20:45.693821   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt ...
	I0924 18:20:45.693849   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt: {Name:mk739c8ca5d31150a754381b18341274a55f3194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694000   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key ...
	I0924 18:20:45.694011   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key: {Name:mk41697d54972101e4b583bdb12adb625c8a2ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694084   11602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:20:45.949465   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt ...
	I0924 18:20:45.949495   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt: {Name:mk6c99d30fd3bd72ef67c33fc7a8ad8032d9e547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949649   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key ...
	I0924 18:20:45.949659   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key: {Name:mk4a9ced92c9b128cb0109242c1c85bc6095111a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949724   11602 certs.go:256] generating profile certs ...
	I0924 18:20:45.949773   11602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key
	I0924 18:20:45.949788   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt with IP's: []
	I0924 18:20:46.111748   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt ...
	I0924 18:20:46.111780   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: {Name:mkcda67505a1d19822a9bd6aa070be1298e2b766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.111931   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key ...
	I0924 18:20:46.111941   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key: {Name:mk7ff22fb920d31c4caef16f50e62ca111cf8f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.112006   11602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9
	I0924 18:20:46.112025   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0924 18:20:46.368887   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 ...
	I0924 18:20:46.368928   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9: {Name:mk3ea14ef69c0bf68f59451ed6ddde96239c0b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369111   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 ...
	I0924 18:20:46.369127   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9: {Name:mk094871a112eec146c05c29dae97b6b80490a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369227   11602 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt
	I0924 18:20:46.369341   11602 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key
	I0924 18:20:46.369416   11602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key
	I0924 18:20:46.369442   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt with IP's: []
	I0924 18:20:46.475111   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt ...
	I0924 18:20:46.475146   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt: {Name:mk14e8d60731076f4aeed39447637ad04acbd93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475328   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key ...
	I0924 18:20:46.475341   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key: {Name:mk1261b7340504044d617837647a0294e6e60c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475529   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:20:46.475574   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:20:46.475609   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:20:46.475644   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:20:46.476210   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:20:46.510341   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:20:46.534245   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:20:46.573657   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:20:46.597284   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 18:20:46.619923   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:20:46.643112   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:20:46.666301   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 18:20:46.689259   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:20:46.712125   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:20:46.728579   11602 ssh_runner.go:195] Run: openssl version
	I0924 18:20:46.734238   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:20:46.744739   11602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749263   11602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749321   11602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.755061   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:20:46.765777   11602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:20:46.770113   11602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:20:46.770173   11602 kubeadm.go:392] StartCluster: {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:46.770261   11602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:20:46.770309   11602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:20:46.805114   11602 cri.go:89] found id: ""
	I0924 18:20:46.805195   11602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:20:46.816665   11602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:20:46.826242   11602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:20:46.835662   11602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:20:46.835682   11602 kubeadm.go:157] found existing configuration files:
	
	I0924 18:20:46.835732   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:20:46.844574   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:20:46.844639   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:20:46.853707   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:20:46.862302   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:20:46.862358   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:20:46.871498   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.880100   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:20:46.880165   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.889113   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:20:46.898369   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:20:46.898428   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:20:46.907411   11602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:20:46.952940   11602 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:20:46.953015   11602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:20:47.040390   11602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:20:47.040491   11602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:20:47.040607   11602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:20:47.049167   11602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:20:47.050888   11602 out.go:235]   - Generating certificates and keys ...
	I0924 18:20:47.050961   11602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:20:47.051052   11602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:20:47.131678   11602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:20:47.547895   11602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:20:47.601285   11602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:20:47.832128   11602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:20:48.031950   11602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:20:48.032124   11602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.210630   11602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:20:48.210816   11602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.300960   11602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:20:48.605685   11602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:20:48.809001   11602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:20:48.809097   11602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:20:49.163476   11602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:20:49.371134   11602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:20:49.529427   11602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:20:49.721235   11602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:20:49.836924   11602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:20:49.837300   11602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:20:49.839677   11602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:20:49.841378   11602 out.go:235]   - Booting up control plane ...
	I0924 18:20:49.841496   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:20:49.841559   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:20:49.841618   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:20:49.858387   11602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:20:49.866657   11602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:20:49.866723   11602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:20:49.987294   11602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:20:49.987476   11602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:20:50.488576   11602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.853577ms
	I0924 18:20:50.488656   11602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:20:55.489419   11602 kubeadm.go:310] [api-check] The API server is healthy after 5.002843483s
	I0924 18:20:55.501919   11602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:20:55.515354   11602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:20:55.545511   11602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:20:55.545740   11602 kubeadm.go:310] [mark-control-plane] Marking the node addons-218885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:20:55.558654   11602 kubeadm.go:310] [bootstrap-token] Using token: wfmddn.jqm9ftj1c9z5a6vs
	I0924 18:20:55.560273   11602 out.go:235]   - Configuring RBAC rules ...
	I0924 18:20:55.560435   11602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:20:55.568873   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:20:55.578532   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:20:55.582388   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:20:55.586382   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:20:55.593349   11602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:20:55.897630   11602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:20:56.326166   11602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:20:56.895415   11602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:20:56.896193   11602 kubeadm.go:310] 
	I0924 18:20:56.896289   11602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:20:56.896301   11602 kubeadm.go:310] 
	I0924 18:20:56.896422   11602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:20:56.896443   11602 kubeadm.go:310] 
	I0924 18:20:56.896479   11602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:20:56.896571   11602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:20:56.896662   11602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:20:56.896677   11602 kubeadm.go:310] 
	I0924 18:20:56.896760   11602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:20:56.896768   11602 kubeadm.go:310] 
	I0924 18:20:56.896837   11602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:20:56.896846   11602 kubeadm.go:310] 
	I0924 18:20:56.896915   11602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:20:56.897013   11602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:20:56.897102   11602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:20:56.897113   11602 kubeadm.go:310] 
	I0924 18:20:56.897214   11602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:20:56.897334   11602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:20:56.897344   11602 kubeadm.go:310] 
	I0924 18:20:56.897455   11602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.897590   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:20:56.897626   11602 kubeadm.go:310] 	--control-plane 
	I0924 18:20:56.897639   11602 kubeadm.go:310] 
	I0924 18:20:56.897747   11602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:20:56.897756   11602 kubeadm.go:310] 
	I0924 18:20:56.897876   11602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.898032   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:20:56.898926   11602 kubeadm.go:310] W0924 18:20:46.938376     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899246   11602 kubeadm.go:310] W0924 18:20:46.939040     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899401   11602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
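	(Not captured in this log, but a minimal sketch of how one might confirm that a control plane initialized as above is actually serving, assuming the admin kubeconfig path printed in the kubeadm output:)
	  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
	  kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system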
	I0924 18:20:56.899428   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:56.899438   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:56.901322   11602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 18:20:56.902863   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 18:20:56.914363   11602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 18:20:56.930973   11602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:20:56.931114   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:56.931143   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-218885 minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-218885 minikube.k8s.io/primary=true
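	(Not part of the captured output, but a hedged sketch of how the two kubectl invocations above could be verified by hand on the guest, using the same kubectl binary and kubeconfig the log shows minikube using:)
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-218885 --show-labels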
	I0924 18:20:57.076312   11602 ops.go:34] apiserver oom_adj: -16
	I0924 18:20:57.076379   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:57.576425   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.077347   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.577119   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.076927   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.577230   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.077137   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.577008   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.658298   11602 kubeadm.go:1113] duration metric: took 3.727240888s to wait for elevateKubeSystemPrivileges
	I0924 18:21:00.658328   11602 kubeadm.go:394] duration metric: took 13.888161582s to StartCluster
	I0924 18:21:00.658352   11602 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.658482   11602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:21:00.658929   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.659138   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:21:00.659158   11602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:21:00.659219   11602 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
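	(The toEnable map above reflects which addons this profile will turn on; as an illustrative aside not taken from this log, the same per-profile addon state can be inspected or toggled from the minikube CLI:)
	  minikube -p addons-218885 addons list
	  minikube -p addons-218885 addons enable metrics-server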
	I0924 18:21:00.659336   11602 addons.go:69] Setting yakd=true in profile "addons-218885"
	I0924 18:21:00.659349   11602 addons.go:69] Setting inspektor-gadget=true in profile "addons-218885"
	I0924 18:21:00.659352   11602 addons.go:69] Setting default-storageclass=true in profile "addons-218885"
	I0924 18:21:00.659366   11602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-218885"
	I0924 18:21:00.659371   11602 addons.go:69] Setting volcano=true in profile "addons-218885"
	I0924 18:21:00.659357   11602 addons.go:234] Setting addon yakd=true in "addons-218885"
	I0924 18:21:00.659381   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-218885"
	I0924 18:21:00.659390   11602 addons.go:69] Setting volumesnapshots=true in profile "addons-218885"
	I0924 18:21:00.659393   11602 addons.go:69] Setting ingress=true in profile "addons-218885"
	I0924 18:21:00.659399   11602 addons.go:234] Setting addon volumesnapshots=true in "addons-218885"
	I0924 18:21:00.659414   11602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-218885"
	I0924 18:21:00.659374   11602 addons.go:234] Setting addon inspektor-gadget=true in "addons-218885"
	I0924 18:21:00.659424   11602 addons.go:234] Setting addon ingress=true in "addons-218885"
	I0924 18:21:00.659424   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659447   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659367   11602 addons.go:69] Setting storage-provisioner=true in profile "addons-218885"
	I0924 18:21:00.659550   11602 addons.go:234] Setting addon storage-provisioner=true in "addons-218885"
	I0924 18:21:00.659573   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659418   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659383   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-218885"
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659864   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659875   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659887   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659936   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659993   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659385   11602 addons.go:234] Setting addon volcano=true in "addons-218885"
	I0924 18:21:00.659395   11602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-218885"
	I0924 18:21:00.660031   11602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-218885"
	I0924 18:21:00.659449   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660131   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660177   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660206   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660213   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660215   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660246   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659454   11602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:00.659458   11602 addons.go:69] Setting gcp-auth=true in profile "addons-218885"
	I0924 18:21:00.660330   11602 mustload.go:65] Loading cluster: addons-218885
	I0924 18:21:00.660373   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660401   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660467   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660487   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659438   11602 addons.go:69] Setting registry=true in profile "addons-218885"
	I0924 18:21:00.660541   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660588   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660542   11602 addons.go:234] Setting addon registry=true in "addons-218885"
	I0924 18:21:00.660620   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659357   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660726   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659470   11602 addons.go:69] Setting ingress-dns=true in profile "addons-218885"
	I0924 18:21:00.660749   11602 addons.go:234] Setting addon ingress-dns=true in "addons-218885"
	I0924 18:21:00.660774   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660816   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660882   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660899   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661056   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661141   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661204   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661240   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661266   11602 out.go:177] * Verifying Kubernetes components...
	I0924 18:21:00.661080   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661384   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659461   11602 addons.go:69] Setting cloud-spanner=true in profile "addons-218885"
	I0924 18:21:00.661444   11602 addons.go:234] Setting addon cloud-spanner=true in "addons-218885"
	I0924 18:21:00.661469   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659384   11602 addons.go:69] Setting metrics-server=true in profile "addons-218885"
	I0924 18:21:00.661619   11602 addons.go:234] Setting addon metrics-server=true in "addons-218885"
	I0924 18:21:00.661644   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.661822   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661841   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661979   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.662002   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.672130   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0924 18:21:00.681044   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0924 18:21:00.681236   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681465   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0924 18:21:00.681785   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681838   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681788   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.682083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682102   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682225   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682240   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682295   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0924 18:21:00.682410   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682419   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682537   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682552   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682600   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682643   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682683   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682749   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.691487   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691518   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.691625   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0924 18:21:00.691743   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.691812   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0924 18:21:00.691839   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.691926   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.692170   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692210   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692229   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.692243   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.692638   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692695   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692721   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693073   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693157   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693172   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.693195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693371   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.693596   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.693635   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.693926   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.694150   11602 addons.go:234] Setting addon default-storageclass=true in "addons-218885"
	I0924 18:21:00.694198   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.694456   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694483   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.694546   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694577   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.695678   11602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-218885"
	I0924 18:21:00.695724   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.696084   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.696123   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.699951   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.700319   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.700355   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.713968   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0924 18:21:00.714463   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.715097   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.715118   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.715521   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.715582   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0924 18:21:00.716260   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.716297   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.716505   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.724819   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I0924 18:21:00.725028   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0924 18:21:00.725630   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726076   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726173   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.726195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.726596   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.727232   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.727266   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.727423   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0924 18:21:00.728015   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.728034   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.728196   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0924 18:21:00.728690   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.728703   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.728762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.729325   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.729349   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.729621   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729633   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.729639   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729653   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.730009   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730051   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730921   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.731302   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.731334   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.732823   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.733011   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.733030   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.734792   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.734795   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:00.734814   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:00.734823   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.734840   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.735052   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.735064   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	W0924 18:21:00.735144   11602 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0924 18:21:00.748351   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0924 18:21:00.750720   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0924 18:21:00.750728   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0924 18:21:00.751162   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751247   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751319   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751440   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751456   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.751567   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756724   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0924 18:21:00.756730   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.756778   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0924 18:21:00.756725   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0924 18:21:00.756847   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.756861   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756930   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757362   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.757369   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757379   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757891   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757905   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757921   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.757933   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757908   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.758358   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758456   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.759435   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0924 18:21:00.759442   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.759636   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.759920   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.760023   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.760374   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.760387   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.760408   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.760480   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.760610   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.761142   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.761179   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.761488   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.761503   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.761847   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 18:21:00.761975   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.762202   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.763064   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:21:00.763905   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764201   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.764244   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.764687   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764830   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.764843   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.765214   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.765520   11602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:21:00.765754   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.765882   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.766152   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:21:00.766854   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.766965   11602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:00.767251   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:21:00.767271   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.767695   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:21:00.767715   11602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:21:00.767749   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.768686   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:00.768698   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 18:21:00.768713   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.770065   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0924 18:21:00.770538   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.771457   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.771477   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.771872   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.772426   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.772458   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.773088   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0924 18:21:00.774506   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.774988   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775391   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.775411   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775446   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I0924 18:21:00.775557   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775742   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0924 18:21:00.775762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.776043   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776070   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776313   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:21:00.776431   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.776447   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.776497   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.776748   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.776767   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776798   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776829   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.777190   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777241   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777281   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.777317   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.777820   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.777981   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778090   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778249   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778261   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778415   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778483   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778799   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0924 18:21:00.779313   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.779385   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:21:00.779922   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.779987   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0924 18:21:00.780000   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.780014   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.780328   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.781720   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:21:00.781840   11602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:21:00.783389   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:21:00.783406   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:21:00.783408   11602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:21:00.783426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.785670   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:21:00.786392   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.786875   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.786904   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.787147   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.787290   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.787460   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.787571   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.787929   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:21:00.789553   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:21:00.789818   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0924 18:21:00.790777   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:21:00.790798   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:21:00.790817   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.791841   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0924 18:21:00.793491   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.793863   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.793884   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.794037   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.794196   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.794343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.794479   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.795306   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795325   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795413   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795716   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795878   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.795893   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.795928   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.795965   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.796083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796101   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796213   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796228   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796239   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796382   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.796422   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796444   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796634   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796692   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.797108   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.797124   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.797174   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797214   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.797254   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797672   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.797708   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.797893   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797947   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.798167   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.799160   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0924 18:21:00.799285   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799329   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799809   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800183   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.800664   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800835   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.800844   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.801181   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.801262   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.801710   11602 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:21:00.801722   11602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:21:00.801827   11602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:21:00.802746   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.802972   11602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:21:00.803140   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:00.803158   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:21:00.803175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.803332   11602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:00.803346   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:21:00.803360   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.804116   11602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 18:21:00.804171   11602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:21:00.804317   11602 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:21:00.804328   11602 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:21:00.804343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.806039   11602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:21:00.806052   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:21:00.806068   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.807823   11602 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:21:00.807997   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808507   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808913   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.808939   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809198   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:00.809214   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:21:00.809230   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.809866   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809901   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809952   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809996   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810009   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810036   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810052   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810069   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810710   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810758   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.810762   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.810798   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810928   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.810938   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810973   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811072   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811124   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811175   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811575   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.811599   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.811747   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.811961   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.812105   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.812231   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.813492   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813801   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.813819   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813949   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.814102   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.814242   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.814374   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.819089   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0924 18:21:00.819472   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.819662   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0924 18:21:00.819981   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.819993   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820026   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.820352   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.820499   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.820570   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.820585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820921   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.821036   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.822394   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822536   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822579   11602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:00.822590   11602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:21:00.822614   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.824394   11602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:21:00.825222   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825626   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.825642   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825660   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:21:00.825679   11602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:21:00.825698   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.825895   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.826045   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.826169   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.826315   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.828341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828768   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.828797   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828911   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.829107   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.829220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.829309   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.833381   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0924 18:21:00.833708   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.834195   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.834214   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.834741   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.834967   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.836909   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.838863   11602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 18:21:00.840172   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:00.840190   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 18:21:00.840204   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.843461   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.843939   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.843964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.844120   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.844264   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.844395   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.844488   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.925784   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:21:00.967714   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:21:01.124083   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:01.139520   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:01.209659   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:21:01.209681   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:21:01.211490   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:21:01.211509   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:21:01.230706   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:01.259266   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:01.265419   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:21:01.265444   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:21:01.267525   11602 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:21:01.267542   11602 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:21:01.270870   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:21:01.270886   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:21:01.294065   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:01.302436   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:21:01.302464   11602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:21:01.303436   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:21:01.303457   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:21:01.336902   11602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:21:01.336926   11602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:21:01.390129   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:01.405905   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:01.443401   11602 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:21:01.443421   11602 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:21:01.460206   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:21:01.460233   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:21:01.489629   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:21:01.489659   11602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:21:01.516924   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:21:01.516952   11602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:21:01.527602   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:21:01.527630   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:21:01.530327   11602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.530344   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:21:01.544683   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:21:01.544711   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:21:01.689986   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.690011   11602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:21:01.705932   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:21:01.705958   11602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:21:01.740697   11602 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:21:01.740721   11602 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:21:01.775169   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.804259   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:21:01.804283   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:21:01.819198   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:21:01.819230   11602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:21:01.827355   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.855195   11602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:01.855219   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:21:01.951137   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:21:01.951166   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:21:01.969440   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:21:01.969463   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:21:02.069888   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.069915   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:21:02.099859   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:02.231068   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:21:02.231095   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:21:02.305967   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:21:02.305990   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:21:02.390434   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.434755   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:21:02.434778   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:21:02.586683   11602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:21:02.586715   11602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:21:02.733250   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:21:02.733348   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:21:02.792924   11602 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:02.792950   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:21:03.055872   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.055895   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:21:03.132217   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:03.134229   11602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.208412456s)
	I0924 18:21:03.134255   11602 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0924 18:21:03.134280   11602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.166538652s)
	I0924 18:21:03.134987   11602 node_ready.go:35] waiting up to 6m0s for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139952   11602 node_ready.go:49] node "addons-218885" has status "Ready":"True"
	I0924 18:21:03.139976   11602 node_ready.go:38] duration metric: took 4.969165ms for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139986   11602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:03.150885   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:03.433867   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.668937   11602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-218885" context rescaled to 1 replicas
	I0924 18:21:03.814522   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.690406906s)
	I0924 18:21:03.814578   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814590   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.814905   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.814918   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:03.814925   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:03.814936   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814944   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.815212   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.815229   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:05.193674   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.675146   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.776279   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:21:07.776319   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:07.779561   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780040   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:07.780063   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780297   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:07.780488   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:07.780661   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:07.780787   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:07.972822   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.833257544s)
	I0924 18:21:07.972874   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972887   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972834   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.742092294s)
	I0924 18:21:07.972905   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.713615593s)
	I0924 18:21:07.972935   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972950   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972937   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973033   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973066   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.567135616s)
	I0924 18:21:07.973034   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.582880208s)
	I0924 18:21:07.972999   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.678903665s)
	I0924 18:21:07.973102   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973112   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973145   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973154   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973200   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973227   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973230   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.198037335s)
	I0924 18:21:07.973239   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973249   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973251   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973257   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973257   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973262   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973267   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973263   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.14588148s)
	I0924 18:21:07.973276   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973283   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973374   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.873477591s)
	W0924 18:21:07.973425   11602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:07.973466   11602 retry.go:31] will retry after 341.273334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:07.973483   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973512   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973519   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973526   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973532   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973533   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973543   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973551   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973557   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973595   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.583073615s)
	I0924 18:21:07.973620   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973630   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973771   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973814   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973815   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.841563977s)
	I0924 18:21:07.973828   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973844   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973850   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973858   11602 addons.go:475] Verifying addon metrics-server=true in "addons-218885"
	I0924 18:21:07.973878   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973891   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973971   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973979   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974078   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974087   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974094   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974100   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974255   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974275   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974281   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974287   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974331   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974353   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974359   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974366   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974373   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974966   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974991   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974998   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975194   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975217   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975223   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975231   11602 addons.go:475] Verifying addon registry=true in "addons-218885"
	I0924 18:21:07.975723   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975745   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975768   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975774   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975931   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975939   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975946   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.975952   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976518   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.976541   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976548   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976693   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976707   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976717   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976725   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976754   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976765   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976773   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976780   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976888   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976902   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976910   11602 addons.go:475] Verifying addon ingress=true in "addons-218885"
	I0924 18:21:07.976949   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977417   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977442   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977448   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976973   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977553   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.978625   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.978641   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.979225   11602 out.go:177] * Verifying registry addon...
	I0924 18:21:07.979369   11602 out.go:177] * Verifying ingress addon...
	I0924 18:21:07.980099   11602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-218885 service yakd-dashboard -n yakd-dashboard
	
	I0924 18:21:07.981598   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:21:07.981987   11602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 18:21:07.999231   11602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:21:07.999256   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:07.999600   11602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 18:21:07.999619   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:08.005488   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.005509   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.005801   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:08.005847   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.005864   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.017897   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.017922   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.018287   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.018306   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.058607   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:21:08.094929   11602 addons.go:234] Setting addon gcp-auth=true in "addons-218885"
	I0924 18:21:08.094992   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:08.095419   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.095475   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.110585   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0924 18:21:08.111040   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.111584   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.111611   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.111964   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.112535   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.112578   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.127155   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0924 18:21:08.127631   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.128121   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.128146   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.128433   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.128606   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:08.130080   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:08.130278   11602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:21:08.130305   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:08.133126   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133582   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:08.133611   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133777   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:08.133930   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:08.134104   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:08.134250   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:08.315216   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:08.488445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:08.488845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.002788   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.003393   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.077458   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.643536692s)
	I0924 18:21:09.077506   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077519   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.077783   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.077837   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.077851   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077853   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.077867   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.078166   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.078214   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.078225   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.078240   11602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:09.079280   11602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:21:09.080127   11602 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:21:09.081849   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:09.082510   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:21:09.083069   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:21:09.083086   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:21:09.113707   11602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:21:09.113739   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.175252   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:21:09.175277   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:21:09.215574   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.215599   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:21:09.270926   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.486696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.486738   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.986544   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.987121   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.087460   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.156758   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:10.264232   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.948944982s)
	I0924 18:21:10.264285   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264299   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264666   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264719   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.264726   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.264738   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264746   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264961   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264973   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.556445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.559448   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.822097   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.873812   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.602842869s)
	I0924 18:21:10.873863   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.873886   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874154   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874174   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.874183   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.874191   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874219   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874421   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874465   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874474   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.876389   11602 addons.go:475] Verifying addon gcp-auth=true in "addons-218885"
	I0924 18:21:10.878112   11602 out.go:177] * Verifying gcp-auth addon...
	I0924 18:21:10.879991   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:21:10.914619   11602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:21:10.914644   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:10.986616   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.987116   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.087458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.383545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.486763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.486957   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.640030   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.884322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.985458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.986775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.088092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.156950   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:12.383370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.485195   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.487941   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.587459   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.883672   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.986303   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.986526   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.087330   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.385285   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.485959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.486129   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.586793   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.884002   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.985294   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.987442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.087331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.384138   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.485676   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.486525   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.587163   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.673311   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:14.883885   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.985667   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.987837   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.087254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.538287   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.538499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.538661   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.587780   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.883673   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.986434   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.986755   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.087600   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.161186   11602 pod_ready.go:98] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161211   11602 pod_ready.go:82] duration metric: took 13.010302575s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	E0924 18:21:16.161224   11602 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161239   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:16.383548   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.486230   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.487442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.586690   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.986006   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.986774   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.087310   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.486612   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:17.487453   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.586919   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.883638   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.987330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.987849   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.089144   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.167517   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:18.383520   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.486806   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.486918   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:18.588925   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.883462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.986014   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.986560   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.086554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.383070   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.484874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:19.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.587560   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.883992   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.986152   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.987408   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.086874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.383440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.486268   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.486550   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:20.791631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.793936   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:20.883763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.986920   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.987056   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.088233   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.383254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.486556   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.486845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.587198   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.986396   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.986589   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.087981   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.383307   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.486130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.487114   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.587895   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.883205   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.986726   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.987810   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.087527   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.167137   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:23.382922   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.486893   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.487141   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.586653   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.887051   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.992735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.993112   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.088192   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.384102   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.485524   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.486088   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.588291   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.883718   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.986064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.986669   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.086972   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.167765   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:25.385694   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.487039   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:25.487327   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.587485   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.883440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.987089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.987473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.087677   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.383334   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.486844   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:26.487823   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.586734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.883494   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.986274   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.986679   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.087587   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.383764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.486172   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.486167   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.586436   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.667175   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:27.883579   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.986382   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.986773   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.086697   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.383293   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.493330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.505220   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.883915   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.985128   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.986961   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.086970   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.382946   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.485425   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.487089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.587540   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.670087   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:29.884302   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.985838   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.986275   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.086421   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.385253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.485483   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.486689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.588361   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.883735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.986783   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.987125   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.088911   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.385049   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.486543   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.486992   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.587160   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.883656   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.985711   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.986231   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.086502   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.167554   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:32.384448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.486308   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:32.486463   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.587554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.883253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.987205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.987734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.087771   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.384995   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.486934   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.487318   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.586663   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.884321   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.986319   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.987702   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.087618   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.168690   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:34.387765   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.485791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.486938   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:34.587761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.884048   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.985832   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.986032   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.087501   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.386323   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.486147   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.486397   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.586931   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.884466   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.987056   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.987253   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.086959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.383855   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.486473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.486749   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.586935   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.667520   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:36.884713   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.985614   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.987395   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.094813   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:37.383846   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:37.486004   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:37.486280   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.588888   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.231455   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.234409   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.239132   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.239417   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.383733   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.486322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.486594   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.587058   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.667664   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:38.883555   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.986183   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.986218   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.086393   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.383891   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.485904   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.486274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.883990   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.985035   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.986333   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.086738   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.383797   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.486109   11602 kapi.go:107] duration metric: took 32.504507933s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:21:40.486350   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.586745   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.882856   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.986205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.086472   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.167497   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:41.384061   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.486569   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.587079   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.883691   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.987379   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.086661   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.592448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.593329   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.593353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.884026   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.986740   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.087210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.384130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.486932   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.587734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.671555   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:43.884139   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.986534   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.087447   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.383601   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.486943   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.587092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.883703   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.986744   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.086822   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.384617   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.486345   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.586804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.674703   11602 pod_ready.go:93] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.674728   11602 pod_ready.go:82] duration metric: took 29.513479171s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.674737   11602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682099   11602 pod_ready.go:93] pod "etcd-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.682125   11602 pod_ready.go:82] duration metric: took 7.380934ms for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682136   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727932   11602 pod_ready.go:93] pod "kube-apiserver-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.727960   11602 pod_ready.go:82] duration metric: took 45.815667ms for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727973   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736186   11602 pod_ready.go:93] pod "kube-controller-manager-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.736205   11602 pod_ready.go:82] duration metric: took 8.225404ms for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736216   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741087   11602 pod_ready.go:93] pod "kube-proxy-jsjnj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.741103   11602 pod_ready.go:82] duration metric: took 4.881511ms for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741111   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.988310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.066604   11602 pod_ready.go:93] pod "kube-scheduler-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.066631   11602 pod_ready.go:82] duration metric: took 325.512397ms for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.066644   11602 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.087500   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.384729   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.465983   11602 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.466004   11602 pod_ready.go:82] duration metric: took 399.352493ms for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.466012   11602 pod_ready.go:39] duration metric: took 43.326012607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:46.466029   11602 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:21:46.466084   11602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:21:46.483386   11602 api_server.go:72] duration metric: took 45.824195071s to wait for apiserver process to appear ...
	I0924 18:21:46.483405   11602 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:21:46.483425   11602 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0924 18:21:46.486475   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.489100   11602 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0924 18:21:46.490451   11602 api_server.go:141] control plane version: v1.31.1
	I0924 18:21:46.490474   11602 api_server.go:131] duration metric: took 7.061904ms to wait for apiserver health ...
	I0924 18:21:46.490484   11602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:21:46.588064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.672865   11602 system_pods.go:59] 17 kube-system pods found
	I0924 18:21:46.672904   11602 system_pods.go:61] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:46.672916   11602 system_pods.go:61] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:46.672926   11602 system_pods.go:61] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:46.672936   11602 system_pods.go:61] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:46.672942   11602 system_pods.go:61] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:46.672948   11602 system_pods.go:61] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:46.672954   11602 system_pods.go:61] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:46.672962   11602 system_pods.go:61] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:46.672971   11602 system_pods.go:61] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:46.672979   11602 system_pods.go:61] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:46.672987   11602 system_pods.go:61] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:46.672995   11602 system_pods.go:61] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:46.673003   11602 system_pods.go:61] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:46.673007   11602 system_pods.go:61] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:46.673014   11602 system_pods.go:61] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673022   11602 system_pods.go:61] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673027   11602 system_pods.go:61] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:46.673035   11602 system_pods.go:74] duration metric: took 182.544371ms to wait for pod list to return data ...
	I0924 18:21:46.673044   11602 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:21:46.864990   11602 default_sa.go:45] found service account: "default"
	I0924 18:21:46.865016   11602 default_sa.go:55] duration metric: took 191.965785ms for default service account to be created ...
	I0924 18:21:46.865028   11602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:21:46.884297   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.986602   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.070157   11602 system_pods.go:86] 17 kube-system pods found
	I0924 18:21:47.070185   11602 system_pods.go:89] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:47.070195   11602 system_pods.go:89] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:47.070203   11602 system_pods.go:89] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:47.070211   11602 system_pods.go:89] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:47.070215   11602 system_pods.go:89] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:47.070219   11602 system_pods.go:89] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:47.070223   11602 system_pods.go:89] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:47.070226   11602 system_pods.go:89] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:47.070229   11602 system_pods.go:89] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:47.070232   11602 system_pods.go:89] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:47.070237   11602 system_pods.go:89] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:47.070240   11602 system_pods.go:89] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:47.070243   11602 system_pods.go:89] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:47.070246   11602 system_pods.go:89] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:47.070253   11602 system_pods.go:89] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070257   11602 system_pods.go:89] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070261   11602 system_pods.go:89] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:47.070266   11602 system_pods.go:126] duration metric: took 205.232474ms to wait for k8s-apps to be running ...
	I0924 18:21:47.070273   11602 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:21:47.070316   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:21:47.087696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.088486   11602 system_svc.go:56] duration metric: took 18.204875ms WaitForService to wait for kubelet
	I0924 18:21:47.088509   11602 kubeadm.go:582] duration metric: took 46.429320046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:21:47.088529   11602 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:21:47.266397   11602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:21:47.266422   11602 node_conditions.go:123] node cpu capacity is 2
	I0924 18:21:47.266433   11602 node_conditions.go:105] duration metric: took 177.899279ms to run NodePressure ...
	I0924 18:21:47.266444   11602 start.go:241] waiting for startup goroutines ...
	I0924 18:21:47.383807   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.486627   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.592685   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.882809   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.988953   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.088085   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.384495   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.884003   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.986521   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.089118   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.384064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.487365   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.586764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.883741   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.986565   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.086791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.383210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.486863   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.586794   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.883384   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.986147   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.087529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.383646   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.487904   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.587015   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.883461   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.986235   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.087462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:52.383965   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:52.485684   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.586927   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.043269   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.044081   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.086805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.384041   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.489996   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.588300   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.884430   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.986023   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.088358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.384017   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.486355   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.587249   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.883465   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.986368   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.088397   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.387044   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.486136   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.587101   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.883331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.986435   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.086566   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.383493   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.486431   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.587234   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.884841   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.986911   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.088106   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.384206   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.487256   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.587982   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.884019   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.994140   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.095443   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.383978   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.486983   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.587545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.883975   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.986500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.087389   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.388016   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.487717   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.591066   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.884701   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.986927   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.089353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.385499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.491326   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.586790   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.884136   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.986787   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.089833   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.388730   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.502425   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.597581   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.884562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.989808   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.089518   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:02.384237   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:02.486541   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.587146   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.079446   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.080120   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.087562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.383714   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.486549   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.587281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.884126   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.987082   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.094340   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.384081   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.486442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.586869   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.883281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.985346   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.086875   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.385212   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.487246   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.587182   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.886629   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.987975   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.087851   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.383918   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.487588   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.587475   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.883377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.986090   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.087419   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.384451   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.487315   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.588370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.884884   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.988441   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.088256   11602 kapi.go:107] duration metric: took 59.005743641s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 18:22:08.384288   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.486671   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.883496   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.986150   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.384140   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.486763   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.883529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.985845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.383692   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.485952   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.883625   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.986197   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.383715   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.486007   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.883706   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.986310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.485858   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.883805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.986764   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.385789   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.488283   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.884377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.987274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.386814   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.487614   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.884301   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.986008   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.385093   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.486500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.884358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.985775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.383761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.486006   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.883791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.986849   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.592172   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.592689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.883336   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.986491   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.383313   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.485567   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.988696   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:19.384325   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:19.485836   11602 kapi.go:107] duration metric: took 1m11.503845867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 18:22:19.883804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.442509   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.884372   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.384165   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.883778   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.383574   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.883482   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.384312   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.884604   11602 kapi.go:107] duration metric: took 1m13.004608549s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:22:23.886195   11602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-218885 cluster.
	I0924 18:22:23.887597   11602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:22:23.888920   11602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:22:23.890409   11602 out.go:177] * Enabled addons: cloud-spanner, metrics-server, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 18:22:23.891803   11602 addons.go:510] duration metric: took 1m23.232581307s for enable addons: enabled=[cloud-spanner metrics-server ingress-dns storage-provisioner inspektor-gadget nvidia-device-plugin yakd default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0924 18:22:23.891846   11602 start.go:246] waiting for cluster config update ...
	I0924 18:22:23.891861   11602 start.go:255] writing updated cluster config ...
	I0924 18:22:23.892111   11602 ssh_runner.go:195] Run: rm -f paused
	I0924 18:22:23.942645   11602 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:22:23.944149   11602 out.go:177] * Done! kubectl is now configured to use "addons-218885" cluster and "default" namespace by default
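	(Editor's note) The gcp-auth hints printed above can be exercised directly against this cluster. A minimal sketch, assuming the skip label takes the value "true" (the log only names the `gcp-auth-skip-secret` key) and reusing the `addons-218885` context/profile from this run; the pod name and image below are illustrative placeholders, not part of the test:

	  # Opt a single pod out of credential mounting by labeling it at creation time;
	  # pods carrying the gcp-auth-skip-secret label are skipped by the webhook.
	  kubectl --context addons-218885 run demo --image=docker.io/kicbase/echo-server \
	    --labels=gcp-auth-skip-secret=true

	  # Re-run the addon with --refresh so credentials are mounted into pods that
	  # already existed (after they are recreated), as the message above suggests.
	  minikube -p addons-218885 addons enable gcp-auth --refresh

	The label selectors polled by kapi.go above can be inspected the same way, e.g. `kubectl --context addons-218885 get pods -A -l kubernetes.io/minikube-addons=gcp-auth`.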
	
	
	==> CRI-O <==
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.445818475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202834445794912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25ca6047-8361-450f-8e25-088f836f97f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.446314936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ae1cf4c-efef-4b8d-a045-717b2f3a5dc6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.446487135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ae1cf4c-efef-4b8d-a045-717b2f3a5dc6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.447216374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c94
43b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f
62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata
:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ae1cf4c-efef-4b8d-a045-717b2f3a5dc6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.482402961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=922e036c-7632-4d69-89b4-a6f0f82cfc54 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.482484510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=922e036c-7632-4d69-89b4-a6f0f82cfc54 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.483967045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb85fe63-af11-4232-a61f-870cea0894c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.485193057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202834485167967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb85fe63-af11-4232-a61f-870cea0894c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.485802784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f71ff1d-8c6b-4173-a721-660765d8aa53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.485862083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f71ff1d-8c6b-4173-a721-660765d8aa53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.486287154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c94
43b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f
62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata
:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f71ff1d-8c6b-4173-a721-660765d8aa53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.517785227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=283035b0-6541-4fcb-92da-fae9903b1690 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.517859429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=283035b0-6541-4fcb-92da-fae9903b1690 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.519054649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca50ceda-7515-41cc-b094-e8389fb22528 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.520229235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202834520203693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca50ceda-7515-41cc-b094-e8389fb22528 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.520726844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32033e05-8021-4b49-b2f8-addac33f0756 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.520786104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32033e05-8021-4b49-b2f8-addac33f0756 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.521170535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c94
43b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f
62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata
:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32033e05-8021-4b49-b2f8-addac33f0756 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.556798987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26567384-7b93-445c-a7d1-ecc24ad3636e name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.556919979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26567384-7b93-445c-a7d1-ecc24ad3636e name=/runtime.v1.RuntimeService/Version
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.558317614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05f872c0-2759-4da3-9477-3e18eb5006fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.559480309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202834559453139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05f872c0-2759-4da3-9477-3e18eb5006fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.560041951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d00e7ab8-80c6-4096-aff8-94f1114e48cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.560095804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d00e7ab8-80c6-4096-aff8-94f1114e48cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:33:54 addons-218885 crio[662]: time="2024-09-24 18:33:54.560722929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6642ed8d7c9aa2829a8170c360aa3d0cdd4c87a44f08b7a6088af9b6a869c70d,PodSandboxId:7655fd4af0c12638291e37c07ed7931db59cdfd7e954bb54f035c01fa03304ff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1727202114957766742,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8hhkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d37fb54d-5671-4db7-8e00-385c2d490ff6,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b171331cbfafc2c342a6ac5aba64250fe005cf4f948c34a2e16aa4678afa2ae,PodSandboxId:cceae892b0faf34431b79d0942aac1a7956f8cffd6f573ed5fa218398c06f442,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727202114857420711,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h4fmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3b820a54-d92f-459f-b351-ff53103865df,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48
d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c94
43b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f
62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata
:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt
:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d00e7ab8-80c6-4096-aff8-94f1114e48cc name=/runtime.v1.RuntimeService/ListContainers
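	Editor's note: the repeated Version / ImageFsInfo / ListContainers exchanges above are routine polling of CRI-O's gRPC API (both the kubelet and crictl issue these calls), which is why the same container list is dumped over and over at debug level. For reference, a minimal client sketch that issues the same two RuntimeService calls is shown below; it is not part of minikube or of this test suite, and the import paths and the socket path (taken from the kubeadm.alpha.kubernetes.io/cri-socket annotation further down) are assumptions.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path as reported by the node annotation kubeadm.alpha.kubernetes.io/cri-socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Equivalent of the "/runtime.v1.RuntimeService/Version" requests in the log above.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Equivalent of the "/runtime.v1.RuntimeService/ListContainers" requests with an empty filter.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	The same listing, rendered by crictl on the node, is what appears in the "==> container status <==" table below.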
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74747d3759d10       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   4de634f477b21       hello-world-app-55bf9c44b4-6h8qp
	337fc816891f4       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                        2 minutes ago       Running             headlamp                  0                   8513afce7fba1       headlamp-7b5c95b59d-5nkmt
	d6c6ac506dcfc       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   c37c20e05e884       nginx
	c303d9afee770       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   2c67c0d05137e       gcp-auth-89d5ffd79-b9jr2
	6642ed8d7c9aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   7655fd4af0c12       ingress-nginx-admission-patch-8hhkt
	2b171331cbfaf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   cceae892b0faf       ingress-nginx-admission-create-h4fmh
	70e31517907a7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   059713d5f2942       metrics-server-84c5f94fbc-pkzn4
	892df4e49ab85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   d3cf49536a775       storage-provisioner
	7be47175c23bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   f51323ffd92af       coredns-7c65d6cfc9-wbgv9
	05055f26daa39       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   9379757a98736       kube-proxy-jsjnj
	01aed06020fea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   5b729a73d998d       etcd-addons-218885
	5872d2d84daec       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   a466770b867e2       kube-scheduler-addons-218885
	b45900bdb8412       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   f744f09f310f1       kube-apiserver-addons-218885
	176b7e7ab3b8a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   8363686ba5d19       kube-controller-manager-addons-218885
	
	
	==> coredns [7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae] <==
	Trace[220166093]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:21:35.643)
	Trace[220166093]: [30.000976871s] [30.000976871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37772 - 57440 "HINFO IN 3713161987249755073.4462496746838402409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019354444s
	[INFO] 10.244.0.7:47836 - 60886 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000305804s
	[INFO] 10.244.0.7:47836 - 53458 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110924s
	[INFO] 10.244.0.7:54106 - 25653 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000141925s
	[INFO] 10.244.0.7:54106 - 53304 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103244s
	[INFO] 10.244.0.7:50681 - 41048 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104188s
	[INFO] 10.244.0.7:50681 - 16991 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007015s
	[INFO] 10.244.0.7:52606 - 52990 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072691s
	[INFO] 10.244.0.7:52606 - 39420 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000217542s
	[INFO] 10.244.0.7:60763 - 62338 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049324s
	[INFO] 10.244.0.7:60763 - 35968 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041375s
	[INFO] 10.244.0.21:36769 - 28384 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276065s
	[INFO] 10.244.0.21:57042 - 58510 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133543s
	[INFO] 10.244.0.21:58980 - 33022 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095524s
	[INFO] 10.244.0.21:60903 - 3777 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078686s
	[INFO] 10.244.0.21:36852 - 36641 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007641s
	[INFO] 10.244.0.21:41780 - 8788 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082769s
	[INFO] 10.244.0.21:33291 - 39478 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000959689s
	[INFO] 10.244.0.21:40331 - 57937 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000913308s
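	Editor's note: the NXDOMAIN answers above are not lookup failures; they are ordinary resolv.conf search-path expansion, where the client pod's stub resolver appends each search domain to the requested name in turn and only the fully qualified form returns NOERROR. A pod in kube-system typically resolves against a file like the sketch below; the nameserver address and the ndots value are assumed cluster defaults and are not captured in this log.

	nameserver 10.96.0.10
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5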
	
	
	==> describe nodes <==
	Name:               addons-218885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-218885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-218885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-218885
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-218885
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:33:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:32:00 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:32:00 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:32:00 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:32:00 +0000   Tue, 24 Sep 2024 18:20:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-218885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a62f96b82b1423cb3ca4a7e749331c6
	  System UUID:                5a62f96b-82b1-423c-b3ca-4a7e749331c6
	  Boot ID:                    98ef14c8-41cc-4a65-8db8-db6c1413a40a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-6h8qp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-b9jr2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  headlamp                    headlamp-7b5c95b59d-5nkmt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 coredns-7c65d6cfc9-wbgv9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-218885                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-218885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-218885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jsjnj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-218885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-pkzn4          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-218885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-218885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-218885 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-218885 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-218885 event: Registered Node addons-218885 in Controller
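	Editor's note: as a cross-check, the Allocated resources figures above follow directly from the per-pod requests listed in the table (percentages are against the 2-CPU / 3912788Ki allocatable shown for this node):

	  cpu:    100m + 100m + 250m + 200m + 100m + 100m = 850m  -> 42% of 2000m
	  memory: 70Mi + 100Mi + 200Mi = 370Mi                    -> 9% of 3912788Ki (~3821Mi)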
	
	
	==> dmesg <==
	[  +6.289241] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.551408] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.866552] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.601209] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.093860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:22] kauditd_printk_skb: 80 callbacks suppressed
	[  +6.788661] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.792651] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.642866] kauditd_printk_skb: 43 callbacks suppressed
	[  +7.574267] kauditd_printk_skb: 3 callbacks suppressed
	[Sep24 18:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:24] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:30] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.411074] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.529390] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.909274] kauditd_printk_skb: 20 callbacks suppressed
	[Sep24 18:31] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.163684] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.109651] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.771905] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.406957] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.115904] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.233753] kauditd_printk_skb: 16 callbacks suppressed
	[Sep24 18:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5] <==
	{"level":"info","ts":"2024-09-24T18:22:03.062928Z","caller":"traceutil/trace.go:171","msg":"trace[2098326213] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"283.243224ms","start":"2024-09-24T18:22:02.779454Z","end":"2024-09-24T18:22:03.062698Z","steps":["trace[2098326213] 'process raft request'  (duration: 282.828061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:03.063637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.191701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:03.063713Z","caller":"traceutil/trace.go:171","msg":"trace[936037991] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1044; }","duration":"190.244747ms","start":"2024-09-24T18:22:02.873426Z","end":"2024-09-24T18:22:03.063671Z","steps":["trace[936037991] 'agreement among raft nodes before linearized reading'  (duration: 190.034984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:22:17.577995Z","caller":"traceutil/trace.go:171","msg":"trace[1652387929] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"205.221262ms","start":"2024-09-24T18:22:17.372753Z","end":"2024-09-24T18:22:17.577975Z","steps":["trace[1652387929] 'read index received'  (duration: 205.01138ms)","trace[1652387929] 'applied index is now lower than readState.Index'  (duration: 209.231µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:22:17.578361Z","caller":"traceutil/trace.go:171","msg":"trace[1017875941] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"420.198995ms","start":"2024-09-24T18:22:17.158143Z","end":"2024-09-24T18:22:17.578342Z","steps":["trace[1017875941] 'process raft request'  (duration: 419.668298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:22:17.158125Z","time spent":"420.358099ms","remote":"127.0.0.1:37844","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1101 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-24T18:22:17.578615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.164866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578661Z","caller":"traceutil/trace.go:171","msg":"trace[1038728676] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"104.208566ms","start":"2024-09-24T18:22:17.474443Z","end":"2024-09-24T18:22:17.578651Z","steps":["trace[1038728676] 'agreement among raft nodes before linearized reading'  (duration: 104.147281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.649301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578829Z","caller":"traceutil/trace.go:171","msg":"trace[2133008651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"206.08201ms","start":"2024-09-24T18:22:17.372738Z","end":"2024-09-24T18:22:17.578820Z","steps":["trace[2133008651] 'agreement among raft nodes before linearized reading'  (duration: 205.618799ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:51.489118Z","caller":"traceutil/trace.go:171","msg":"trace[2026224945] linearizableReadLoop","detail":"{readStateIndex:2186; appliedIndex:2185; }","duration":"263.239917ms","start":"2024-09-24T18:30:51.225863Z","end":"2024-09-24T18:30:51.489103Z","steps":["trace[2026224945] 'read index received'  (duration: 263.063528ms)","trace[2026224945] 'applied index is now lower than readState.Index'  (duration: 175.847µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:30:51.489403Z","caller":"traceutil/trace.go:171","msg":"trace[588894328] transaction","detail":"{read_only:false; response_revision:2041; number_of_response:1; }","duration":"264.853791ms","start":"2024-09-24T18:30:51.224537Z","end":"2024-09-24T18:30:51.489390Z","steps":["trace[588894328] 'process raft request'  (duration: 264.431123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.718428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/registry-test.17f841a5c5f6a88e\" ","response":"range_response_count:1 size:727"}
	{"level":"info","ts":"2024-09-24T18:30:51.489618Z","caller":"traceutil/trace.go:171","msg":"trace[141298160] range","detail":"{range_begin:/registry/events/default/registry-test.17f841a5c5f6a88e; range_end:; response_count:1; response_revision:2041; }","duration":"263.752191ms","start":"2024-09-24T18:30:51.225860Z","end":"2024-09-24T18:30:51.489612Z","steps":["trace[141298160] 'agreement among raft nodes before linearized reading'  (duration: 263.661857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.1176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:30:51.489721Z","caller":"traceutil/trace.go:171","msg":"trace[905351139] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2041; }","duration":"149.1321ms","start":"2024-09-24T18:30:51.340585Z","end":"2024-09-24T18:30:51.489717Z","steps":["trace[905351139] 'agreement among raft nodes before linearized reading'  (duration: 149.10741ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:52.421183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-24T18:30:52.455466Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"33.86366ms","hash":2015250619,"current-db-size-bytes":6524928,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3493888,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-24T18:30:52.455576Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2015250619,"revision":1524,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T18:31:11.923147Z","caller":"traceutil/trace.go:171","msg":"trace[1594584679] linearizableReadLoop","detail":"{readStateIndex:2418; appliedIndex:2417; }","duration":"180.184095ms","start":"2024-09-24T18:31:11.742947Z","end":"2024-09-24T18:31:11.923131Z","steps":["trace[1594584679] 'read index received'  (duration: 179.421282ms)","trace[1594584679] 'applied index is now lower than readState.Index'  (duration: 762.207µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:31:11.923267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.306652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:31:11.923304Z","caller":"traceutil/trace.go:171","msg":"trace[669207299] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller; range_end:; response_count:0; response_revision:2266; }","duration":"180.352098ms","start":"2024-09-24T18:31:11.742942Z","end":"2024-09-24T18:31:11.923294Z","steps":["trace[669207299] 'agreement among raft nodes before linearized reading'  (duration: 180.263747ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:31:11.923415Z","caller":"traceutil/trace.go:171","msg":"trace[1440776756] transaction","detail":"{read_only:false; response_revision:2266; number_of_response:1; }","duration":"324.268386ms","start":"2024-09-24T18:31:11.599140Z","end":"2024-09-24T18:31:11.923409Z","steps":["trace[1440776756] 'process raft request'  (duration: 323.264023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:31:11.923487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:31:11.599123Z","time spent":"324.320186ms","remote":"127.0.0.1:38080","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":696,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/csinodes/addons-218885\" mod_revision:1071 > success:<request_put:<key:\"/registry/csinodes/addons-218885\" value_size:656 >> failure:<request_range:<key:\"/registry/csinodes/addons-218885\" > >"}
	{"level":"info","ts":"2024-09-24T18:31:56.526515Z","caller":"traceutil/trace.go:171","msg":"trace[1426180008] transaction","detail":"{read_only:false; response_revision:2458; number_of_response:1; }","duration":"162.452632ms","start":"2024-09-24T18:31:56.364044Z","end":"2024-09-24T18:31:56.526496Z","steps":["trace[1426180008] 'process raft request'  (duration: 162.338027ms)"],"step_count":1}
	
	
	==> gcp-auth [c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd] <==
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:22:24 Ready to marshal response ...
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:30:36 Ready to marshal response ...
	2024/09/24 18:30:36 Ready to write response ...
	2024/09/24 18:30:45 Ready to marshal response ...
	2024/09/24 18:30:45 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:31:02 Ready to marshal response ...
	2024/09/24 18:31:02 Ready to write response ...
	2024/09/24 18:31:03 Ready to marshal response ...
	2024/09/24 18:31:03 Ready to write response ...
	2024/09/24 18:31:23 Ready to marshal response ...
	2024/09/24 18:31:23 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:33:44 Ready to marshal response ...
	2024/09/24 18:33:44 Ready to write response ...
	
	
	==> kernel <==
	 18:33:54 up 13 min,  0 users,  load average: 0.47, 0.65, 0.52
	Linux addons-218885 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93] <==
	I0924 18:31:17.549570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.551996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.583859       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.583989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.635304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.635349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0924 18:31:18.584648       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0924 18:31:18.636057       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0924 18:31:18.663553       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0924 18:31:19.176932       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0924 18:31:23.694509       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0924 18:31:23.876326       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.166.187"}
	E0924 18:31:24.791416       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:25.798413       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:26.805249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:27.812198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:28.819268       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:29.826573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:30.833568       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:31.840586       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:32.848358       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:33.854844       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0924 18:31:45.893080       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.75.91"}
	I0924 18:33:45.006000       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.155.188"}
	E0924 18:33:46.752766       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de] <==
	W0924 18:32:30.542229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:30.542409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:37.625278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:37.625328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:43.373821       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:43.373993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:58.494047       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:58.494165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:27.139930       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:27.140041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:27.234777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:27.234828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:27.259418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:27.259464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:33:31.789311       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:33:31.789360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:33:44.819646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.470724ms"
	I0924 18:33:44.848368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.489559ms"
	I0924 18:33:44.858046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.628312ms"
	I0924 18:33:44.858177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.535µs"
	I0924 18:33:46.671989       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0924 18:33:46.682402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.075µs"
	I0924 18:33:46.682779       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0924 18:33:48.067219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.765812ms"
	I0924 18:33:48.067299       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.868µs"
	
	
	==> kube-proxy [05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:21:04.310826       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:21:04.382306       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0924 18:21:04.382374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:21:05.227657       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:21:05.227715       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:21:05.227740       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:21:05.641037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:21:05.641385       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:21:05.641397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:21:05.651378       1 config.go:199] "Starting service config controller"
	I0924 18:21:05.651407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:21:05.651432       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:21:05.651436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:21:05.651985       1 config.go:328] "Starting node config controller"
	I0924 18:21:05.651993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:21:05.751639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:21:05.751676       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:21:05.760267       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef] <==
	W0924 18:20:53.710329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:20:53.710356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:53.710442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:53.710546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.712017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:53.712081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.523135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:54.523250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.558496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:54.558598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.602148       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:20:54.602194       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:20:54.615690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:20:54.616117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.623597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 18:20:54.623684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.642634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:20:54.643012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.652972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:54.653082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.764823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 18:20:54.764896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0924 18:20:57.293683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:33:44 addons-218885 kubelet[1212]: I0924 18:33:44.880168    1212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hg8\" (UniqueName: \"kubernetes.io/projected/5dbb2ff2-a88e-47dd-98ff-788c8d9f990b-kube-api-access-b2hg8\") pod \"hello-world-app-55bf9c44b4-6h8qp\" (UID: \"5dbb2ff2-a88e-47dd-98ff-788c8d9f990b\") " pod="default/hello-world-app-55bf9c44b4-6h8qp"
	Sep 24 18:33:45 addons-218885 kubelet[1212]: I0924 18:33:45.987666    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnktr\" (UniqueName: \"kubernetes.io/projected/209a83c9-7b47-44e1-8897-682ab287a114-kube-api-access-xnktr\") pod \"209a83c9-7b47-44e1-8897-682ab287a114\" (UID: \"209a83c9-7b47-44e1-8897-682ab287a114\") "
	Sep 24 18:33:45 addons-218885 kubelet[1212]: I0924 18:33:45.990136    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209a83c9-7b47-44e1-8897-682ab287a114-kube-api-access-xnktr" (OuterVolumeSpecName: "kube-api-access-xnktr") pod "209a83c9-7b47-44e1-8897-682ab287a114" (UID: "209a83c9-7b47-44e1-8897-682ab287a114"). InnerVolumeSpecName "kube-api-access-xnktr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:33:46 addons-218885 kubelet[1212]: I0924 18:33:46.035002    1212 scope.go:117] "RemoveContainer" containerID="4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: I0924 18:33:46.078219    1212 scope.go:117] "RemoveContainer" containerID="4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: E0924 18:33:46.078735    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec\": container with ID starting with 4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec not found: ID does not exist" containerID="4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: I0924 18:33:46.078767    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec"} err="failed to get container status \"4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec\": rpc error: code = NotFound desc = could not find container \"4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec\": container with ID starting with 4fa7ef575957e15ae50cd88471a7f48bc5a2e72eec5260f95ac625aa86485bec not found: ID does not exist"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: I0924 18:33:46.088646    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xnktr\" (UniqueName: \"kubernetes.io/projected/209a83c9-7b47-44e1-8897-682ab287a114-kube-api-access-xnktr\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:33:46 addons-218885 kubelet[1212]: I0924 18:33:46.223593    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209a83c9-7b47-44e1-8897-682ab287a114" path="/var/lib/kubelet/pods/209a83c9-7b47-44e1-8897-682ab287a114/volumes"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: E0924 18:33:46.226842    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b78ade15-29d4-44d6-bef8-3a957b847bb0"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: E0924 18:33:46.898183    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202826897786869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550632,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:33:46 addons-218885 kubelet[1212]: E0924 18:33:46.898219    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202826897786869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550632,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:33:48 addons-218885 kubelet[1212]: I0924 18:33:48.220785    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b820a54-d92f-459f-b351-ff53103865df" path="/var/lib/kubelet/pods/3b820a54-d92f-459f-b351-ff53103865df/volumes"
	Sep 24 18:33:48 addons-218885 kubelet[1212]: I0924 18:33:48.221487    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37fb54d-5671-4db7-8e00-385c2d490ff6" path="/var/lib/kubelet/pods/d37fb54d-5671-4db7-8e00-385c2d490ff6/volumes"
	Sep 24 18:33:49 addons-218885 kubelet[1212]: I0924 18:33:49.913517    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fa50801-2baf-4242-9d1b-2b9f680d5498-webhook-cert\") pod \"9fa50801-2baf-4242-9d1b-2b9f680d5498\" (UID: \"9fa50801-2baf-4242-9d1b-2b9f680d5498\") "
	Sep 24 18:33:49 addons-218885 kubelet[1212]: I0924 18:33:49.913565    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j54z\" (UniqueName: \"kubernetes.io/projected/9fa50801-2baf-4242-9d1b-2b9f680d5498-kube-api-access-7j54z\") pod \"9fa50801-2baf-4242-9d1b-2b9f680d5498\" (UID: \"9fa50801-2baf-4242-9d1b-2b9f680d5498\") "
	Sep 24 18:33:49 addons-218885 kubelet[1212]: I0924 18:33:49.915637    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa50801-2baf-4242-9d1b-2b9f680d5498-kube-api-access-7j54z" (OuterVolumeSpecName: "kube-api-access-7j54z") pod "9fa50801-2baf-4242-9d1b-2b9f680d5498" (UID: "9fa50801-2baf-4242-9d1b-2b9f680d5498"). InnerVolumeSpecName "kube-api-access-7j54z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:33:49 addons-218885 kubelet[1212]: I0924 18:33:49.916103    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa50801-2baf-4242-9d1b-2b9f680d5498-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9fa50801-2baf-4242-9d1b-2b9f680d5498" (UID: "9fa50801-2baf-4242-9d1b-2b9f680d5498"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.014626    1212 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fa50801-2baf-4242-9d1b-2b9f680d5498-webhook-cert\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.014683    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7j54z\" (UniqueName: \"kubernetes.io/projected/9fa50801-2baf-4242-9d1b-2b9f680d5498-kube-api-access-7j54z\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.054186    1212 scope.go:117] "RemoveContainer" containerID="34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756"
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.070450    1212 scope.go:117] "RemoveContainer" containerID="34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756"
	Sep 24 18:33:50 addons-218885 kubelet[1212]: E0924 18:33:50.071142    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756\": container with ID starting with 34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756 not found: ID does not exist" containerID="34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756"
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.071197    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756"} err="failed to get container status \"34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756\": rpc error: code = NotFound desc = could not find container \"34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756\": container with ID starting with 34d8cf5c7f79b47a8d52423feea5cf5abae4283636ed3e0f86be76b2102c3756 not found: ID does not exist"
	Sep 24 18:33:50 addons-218885 kubelet[1212]: I0924 18:33:50.222215    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa50801-2baf-4242-9d1b-2b9f680d5498" path="/var/lib/kubelet/pods/9fa50801-2baf-4242-9d1b-2b9f680d5498/volumes"
	
	
	==> storage-provisioner [892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34] <==
	I0924 18:21:07.075958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:21:07.205644       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:21:07.205811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:21:07.484816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:21:07.489301       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	I0924 18:21:07.503782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa2296f7-92f6-4a3d-97ef-5ea843d9a5be", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f became leader
	I0924 18:21:07.594117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-218885 -n addons-218885
helpers_test.go:261: (dbg) Run:  kubectl --context addons-218885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-218885 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-218885 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218885/192.168.39.215
	Start Time:       Tue, 24 Sep 2024 18:22:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5n6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z5n6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-218885
	  Normal   Pulling    9m58s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m58s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m58s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m47s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    84s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (368.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.076045ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002819146s
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (74.368352ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 9m31.779321726s

                                                
                                                
** /stderr **
I0924 18:30:32.781066   10949 retry.go:31] will retry after 4.488057694s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (71.599057ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 9m36.339946192s

                                                
                                                
** /stderr **
I0924 18:30:37.341899   10949 retry.go:31] will retry after 3.806962664s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (62.475857ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 9m40.210420124s

                                                
                                                
** /stderr **
I0924 18:30:41.212179   10949 retry.go:31] will retry after 10.1022725s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (223.236873ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 9m50.536732836s

                                                
                                                
** /stderr **
I0924 18:30:51.538773   10949 retry.go:31] will retry after 14.913627889s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (81.624122ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 10m5.532943921s

                                                
                                                
** /stderr **
I0924 18:31:06.534353   10949 retry.go:31] will retry after 7.87436725s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (62.735863ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 10m13.47064379s

                                                
                                                
** /stderr **
I0924 18:31:14.472226   10949 retry.go:31] will retry after 17.417206828s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (61.580683ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 10m30.949891393s

                                                
                                                
** /stderr **
I0924 18:31:31.951774   10949 retry.go:31] will retry after 17.261170114s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (68.093766ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 10m48.27997144s

                                                
                                                
** /stderr **
I0924 18:31:49.281811   10949 retry.go:31] will retry after 41.765830568s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (60.380701ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 11m30.107138674s

                                                
                                                
** /stderr **
I0924 18:32:31.108835   10949 retry.go:31] will retry after 1m19.596635355s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (62.720285ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 12m49.770327804s

                                                
                                                
** /stderr **
I0924 18:33:50.772061   10949 retry.go:31] will retry after 36.376957273s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (59.309787ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 13m26.208344766s

                                                
                                                
** /stderr **
I0924 18:34:27.210408   10949 retry.go:31] will retry after 55.339115956s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (60.099681ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 14m21.609709302s

                                                
                                                
** /stderr **
I0924 18:35:22.611603   10949 retry.go:31] will retry after 1m9.82878282s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-218885 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-218885 top pods -n kube-system: exit status 1 (58.947703ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wbgv9, age: 15m31.499077308s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-218885 -n addons-218885
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 logs -n 25: (1.217231413s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-366438                                                                     | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-880989                                                                     | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | binary-mirror-303583                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40655                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-303583                                                                     | binary-mirror-303583 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-218885 --wait=true                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:30 UTC | 24 Sep 24 18:30 UTC |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh cat                                                                       | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | /opt/local-path-provisioner/pvc-32fb6863-7fde-481e-85f8-da616d5f9350_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | -p addons-218885                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-218885 ssh curl -s                                                                   | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-218885 ip                                                                            | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | addons-218885                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | -p addons-218885                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:31 UTC | 24 Sep 24 18:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-218885 ip                                                                            | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-218885 addons disable                                                                | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:33 UTC | 24 Sep 24 18:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-218885 addons                                                                        | addons-218885        | jenkins | v1.34.0 | 24 Sep 24 18:36 UTC | 24 Sep 24 18:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:12.325736   11602 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:12.325986   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.325997   11602 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:12.326003   11602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:12.326193   11602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:20:12.326790   11602 out.go:352] Setting JSON to false
	I0924 18:20:12.327640   11602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":163,"bootTime":1727201849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:20:12.327726   11602 start.go:139] virtualization: kvm guest
	I0924 18:20:12.329631   11602 out.go:177] * [addons-218885] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:20:12.331012   11602 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:20:12.331079   11602 notify.go:220] Checking for updates...
	I0924 18:20:12.333440   11602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:12.334628   11602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:20:12.335823   11602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.337065   11602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:20:12.338153   11602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:20:12.339404   11602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:12.370285   11602 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:20:12.371583   11602 start.go:297] selected driver: kvm2
	I0924 18:20:12.371597   11602 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:20:12.371608   11602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:20:12.372940   11602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.373043   11602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:20:12.393549   11602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:20:12.393593   11602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:12.393793   11602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:12.393823   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:12.393846   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:12.393854   11602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:12.393894   11602 start.go:340] cluster config:
	{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
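For reference, a profile config like the one above corresponds roughly to a start invocation of this shape (reconstructed from the config fields; the exact flags passed by the test harness are not recorded in this log excerpt):

	out/minikube-linux-amd64 start -p addons-218885 --driver=kvm2 --container-runtime=crio --memory=4000 --cpus=2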
	I0924 18:20:12.393973   11602 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:12.395768   11602 out.go:177] * Starting "addons-218885" primary control-plane node in "addons-218885" cluster
	I0924 18:20:12.396963   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:12.396994   11602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:20:12.397002   11602 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:12.397076   11602 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:20:12.397086   11602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:20:12.397361   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:12.397381   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json: {Name:mk8ae020c4167ae6b07f3b581ad7b941f00493e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:12.397501   11602 start.go:360] acquireMachinesLock for addons-218885: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:20:12.397544   11602 start.go:364] duration metric: took 30.473µs to acquireMachinesLock for "addons-218885"
	I0924 18:20:12.397560   11602 start.go:93] Provisioning new machine with config: &{Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:20:12.397621   11602 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:20:12.399224   11602 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0924 18:20:12.399337   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:20:12.399361   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:20:12.413485   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0924 18:20:12.413984   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:20:12.414522   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:20:12.414543   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:20:12.414994   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:20:12.415195   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:12.415361   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:12.415550   11602 start.go:159] libmachine.API.Create for "addons-218885" (driver="kvm2")
	I0924 18:20:12.415574   11602 client.go:168] LocalClient.Create starting
	I0924 18:20:12.415623   11602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:20:12.521230   11602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:20:12.771341   11602 main.go:141] libmachine: Running pre-create checks...
	I0924 18:20:12.771362   11602 main.go:141] libmachine: (addons-218885) Calling .PreCreateCheck
	I0924 18:20:12.771809   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:12.772210   11602 main.go:141] libmachine: Creating machine...
	I0924 18:20:12.772225   11602 main.go:141] libmachine: (addons-218885) Calling .Create
	I0924 18:20:12.772358   11602 main.go:141] libmachine: (addons-218885) Creating KVM machine...
	I0924 18:20:12.773495   11602 main.go:141] libmachine: (addons-218885) DBG | found existing default KVM network
	I0924 18:20:12.774264   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.774133   11624 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0924 18:20:12.774303   11602 main.go:141] libmachine: (addons-218885) DBG | created network xml: 
	I0924 18:20:12.774319   11602 main.go:141] libmachine: (addons-218885) DBG | <network>
	I0924 18:20:12.774325   11602 main.go:141] libmachine: (addons-218885) DBG |   <name>mk-addons-218885</name>
	I0924 18:20:12.774334   11602 main.go:141] libmachine: (addons-218885) DBG |   <dns enable='no'/>
	I0924 18:20:12.774360   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774381   11602 main.go:141] libmachine: (addons-218885) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:20:12.774452   11602 main.go:141] libmachine: (addons-218885) DBG |     <dhcp>
	I0924 18:20:12.774493   11602 main.go:141] libmachine: (addons-218885) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:20:12.774513   11602 main.go:141] libmachine: (addons-218885) DBG |     </dhcp>
	I0924 18:20:12.774524   11602 main.go:141] libmachine: (addons-218885) DBG |   </ip>
	I0924 18:20:12.774536   11602 main.go:141] libmachine: (addons-218885) DBG |   
	I0924 18:20:12.774546   11602 main.go:141] libmachine: (addons-218885) DBG | </network>
	I0924 18:20:12.774569   11602 main.go:141] libmachine: (addons-218885) DBG | 
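Stripped of the log prefixes, the private network being created above is the following libvirt XML (reassembled from the DBG lines; no content added):

	<network>
	  <name>mk-addons-218885</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>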
	I0924 18:20:12.779356   11602 main.go:141] libmachine: (addons-218885) DBG | trying to create private KVM network mk-addons-218885 192.168.39.0/24...
	I0924 18:20:12.840345   11602 main.go:141] libmachine: (addons-218885) DBG | private KVM network mk-addons-218885 192.168.39.0/24 created
	I0924 18:20:12.840381   11602 main.go:141] libmachine: (addons-218885) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:12.840394   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:12.840325   11624 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:12.840402   11602 main.go:141] libmachine: (addons-218885) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:20:12.840503   11602 main.go:141] libmachine: (addons-218885) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:20:13.080883   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.080784   11624 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa...
	I0924 18:20:13.196783   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196657   11624 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk...
	I0924 18:20:13.196813   11602 main.go:141] libmachine: (addons-218885) DBG | Writing magic tar header
	I0924 18:20:13.196826   11602 main.go:141] libmachine: (addons-218885) DBG | Writing SSH key tar header
	I0924 18:20:13.196836   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:13.196759   11624 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 ...
	I0924 18:20:13.196852   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885
	I0924 18:20:13.196869   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:20:13.196911   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885 (perms=drwx------)
	I0924 18:20:13.196926   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:13.196942   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:20:13.196954   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:20:13.196965   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:20:13.196984   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:20:13.196995   11602 main.go:141] libmachine: (addons-218885) DBG | Checking permissions on dir: /home
	I0924 18:20:13.197007   11602 main.go:141] libmachine: (addons-218885) DBG | Skipping /home - not owner
	I0924 18:20:13.197025   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:20:13.197038   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:20:13.197053   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:20:13.197070   11602 main.go:141] libmachine: (addons-218885) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:20:13.197083   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:13.198004   11602 main.go:141] libmachine: (addons-218885) define libvirt domain using xml: 
	I0924 18:20:13.198029   11602 main.go:141] libmachine: (addons-218885) <domain type='kvm'>
	I0924 18:20:13.198041   11602 main.go:141] libmachine: (addons-218885)   <name>addons-218885</name>
	I0924 18:20:13.198049   11602 main.go:141] libmachine: (addons-218885)   <memory unit='MiB'>4000</memory>
	I0924 18:20:13.198059   11602 main.go:141] libmachine: (addons-218885)   <vcpu>2</vcpu>
	I0924 18:20:13.198066   11602 main.go:141] libmachine: (addons-218885)   <features>
	I0924 18:20:13.198071   11602 main.go:141] libmachine: (addons-218885)     <acpi/>
	I0924 18:20:13.198077   11602 main.go:141] libmachine: (addons-218885)     <apic/>
	I0924 18:20:13.198085   11602 main.go:141] libmachine: (addons-218885)     <pae/>
	I0924 18:20:13.198092   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198097   11602 main.go:141] libmachine: (addons-218885)   </features>
	I0924 18:20:13.198104   11602 main.go:141] libmachine: (addons-218885)   <cpu mode='host-passthrough'>
	I0924 18:20:13.198109   11602 main.go:141] libmachine: (addons-218885)   
	I0924 18:20:13.198116   11602 main.go:141] libmachine: (addons-218885)   </cpu>
	I0924 18:20:13.198121   11602 main.go:141] libmachine: (addons-218885)   <os>
	I0924 18:20:13.198129   11602 main.go:141] libmachine: (addons-218885)     <type>hvm</type>
	I0924 18:20:13.198135   11602 main.go:141] libmachine: (addons-218885)     <boot dev='cdrom'/>
	I0924 18:20:13.198140   11602 main.go:141] libmachine: (addons-218885)     <boot dev='hd'/>
	I0924 18:20:13.198167   11602 main.go:141] libmachine: (addons-218885)     <bootmenu enable='no'/>
	I0924 18:20:13.198188   11602 main.go:141] libmachine: (addons-218885)   </os>
	I0924 18:20:13.198200   11602 main.go:141] libmachine: (addons-218885)   <devices>
	I0924 18:20:13.198211   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='cdrom'>
	I0924 18:20:13.198226   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/boot2docker.iso'/>
	I0924 18:20:13.198237   11602 main.go:141] libmachine: (addons-218885)       <target dev='hdc' bus='scsi'/>
	I0924 18:20:13.198247   11602 main.go:141] libmachine: (addons-218885)       <readonly/>
	I0924 18:20:13.198257   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198267   11602 main.go:141] libmachine: (addons-218885)     <disk type='file' device='disk'>
	I0924 18:20:13.198282   11602 main.go:141] libmachine: (addons-218885)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:20:13.198296   11602 main.go:141] libmachine: (addons-218885)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/addons-218885.rawdisk'/>
	I0924 18:20:13.198308   11602 main.go:141] libmachine: (addons-218885)       <target dev='hda' bus='virtio'/>
	I0924 18:20:13.198316   11602 main.go:141] libmachine: (addons-218885)     </disk>
	I0924 18:20:13.198328   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198339   11602 main.go:141] libmachine: (addons-218885)       <source network='mk-addons-218885'/>
	I0924 18:20:13.198352   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198367   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198380   11602 main.go:141] libmachine: (addons-218885)     <interface type='network'>
	I0924 18:20:13.198390   11602 main.go:141] libmachine: (addons-218885)       <source network='default'/>
	I0924 18:20:13.198398   11602 main.go:141] libmachine: (addons-218885)       <model type='virtio'/>
	I0924 18:20:13.198407   11602 main.go:141] libmachine: (addons-218885)     </interface>
	I0924 18:20:13.198418   11602 main.go:141] libmachine: (addons-218885)     <serial type='pty'>
	I0924 18:20:13.198427   11602 main.go:141] libmachine: (addons-218885)       <target port='0'/>
	I0924 18:20:13.198462   11602 main.go:141] libmachine: (addons-218885)     </serial>
	I0924 18:20:13.198485   11602 main.go:141] libmachine: (addons-218885)     <console type='pty'>
	I0924 18:20:13.198491   11602 main.go:141] libmachine: (addons-218885)       <target type='serial' port='0'/>
	I0924 18:20:13.198499   11602 main.go:141] libmachine: (addons-218885)     </console>
	I0924 18:20:13.198504   11602 main.go:141] libmachine: (addons-218885)     <rng model='virtio'>
	I0924 18:20:13.198513   11602 main.go:141] libmachine: (addons-218885)       <backend model='random'>/dev/random</backend>
	I0924 18:20:13.198518   11602 main.go:141] libmachine: (addons-218885)     </rng>
	I0924 18:20:13.198522   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198527   11602 main.go:141] libmachine: (addons-218885)     
	I0924 18:20:13.198533   11602 main.go:141] libmachine: (addons-218885)   </devices>
	I0924 18:20:13.198538   11602 main.go:141] libmachine: (addons-218885) </domain>
	I0924 18:20:13.198542   11602 main.go:141] libmachine: (addons-218885) 
	I0924 18:20:13.204102   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:cf:a6:03 in network default
	I0924 18:20:13.204625   11602 main.go:141] libmachine: (addons-218885) Ensuring networks are active...
	I0924 18:20:13.204646   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:13.205345   11602 main.go:141] libmachine: (addons-218885) Ensuring network default is active
	I0924 18:20:13.205671   11602 main.go:141] libmachine: (addons-218885) Ensuring network mk-addons-218885 is active
	I0924 18:20:13.207039   11602 main.go:141] libmachine: (addons-218885) Getting domain xml...
	I0924 18:20:13.207785   11602 main.go:141] libmachine: (addons-218885) Creating domain...
	I0924 18:20:14.575302   11602 main.go:141] libmachine: (addons-218885) Waiting to get IP...
	I0924 18:20:14.575964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.576313   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.576343   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.576303   11624 retry.go:31] will retry after 274.373447ms: waiting for machine to come up
	I0924 18:20:14.852639   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:14.852971   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:14.852999   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:14.852930   11624 retry.go:31] will retry after 320.247846ms: waiting for machine to come up
	I0924 18:20:15.174341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.174769   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.174795   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.174721   11624 retry.go:31] will retry after 480.520038ms: waiting for machine to come up
	I0924 18:20:15.656403   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:15.656812   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:15.656838   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:15.656779   11624 retry.go:31] will retry after 445.239578ms: waiting for machine to come up
	I0924 18:20:16.103322   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.103649   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.103675   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.103614   11624 retry.go:31] will retry after 512.464509ms: waiting for machine to come up
	I0924 18:20:16.617221   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:16.617724   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:16.617760   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:16.617646   11624 retry.go:31] will retry after 857.414245ms: waiting for machine to come up
	I0924 18:20:17.477266   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:17.477652   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:17.477673   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:17.477626   11624 retry.go:31] will retry after 806.166754ms: waiting for machine to come up
	I0924 18:20:18.285640   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:18.286077   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:18.286100   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:18.286052   11624 retry.go:31] will retry after 1.16238491s: waiting for machine to come up
	I0924 18:20:19.450511   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:19.450884   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:19.450904   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:19.450866   11624 retry.go:31] will retry after 1.335718023s: waiting for machine to come up
	I0924 18:20:20.788441   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:20.788913   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:20.788943   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:20.788872   11624 retry.go:31] will retry after 1.799499594s: waiting for machine to come up
	I0924 18:20:22.589666   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:22.590013   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:22.590062   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:22.589996   11624 retry.go:31] will retry after 1.859729205s: waiting for machine to come up
	I0924 18:20:24.452908   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:24.453276   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:24.453302   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:24.453236   11624 retry.go:31] will retry after 2.767497543s: waiting for machine to come up
	I0924 18:20:27.223890   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:27.224340   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:27.224362   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:27.224297   11624 retry.go:31] will retry after 4.46492502s: waiting for machine to come up
	I0924 18:20:31.694510   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:31.694968   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find current IP address of domain addons-218885 in network mk-addons-218885
	I0924 18:20:31.694990   11602 main.go:141] libmachine: (addons-218885) DBG | I0924 18:20:31.694927   11624 retry.go:31] will retry after 4.457689137s: waiting for machine to come up
	I0924 18:20:36.156477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157022   11602 main.go:141] libmachine: (addons-218885) Found IP for machine: 192.168.39.215
	I0924 18:20:36.157042   11602 main.go:141] libmachine: (addons-218885) Reserving static IP address...
	I0924 18:20:36.157083   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has current primary IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.157396   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "addons-218885", mac: "52:54:00:4f:2a:e2", ip: "192.168.39.215"} in network mk-addons-218885
	I0924 18:20:36.229161   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:36.229194   11602 main.go:141] libmachine: (addons-218885) Reserved static IP address: 192.168.39.215
	I0924 18:20:36.229207   11602 main.go:141] libmachine: (addons-218885) Waiting for SSH to be available...
	I0924 18:20:36.231373   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:36.231611   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885
	I0924 18:20:36.231644   11602 main.go:141] libmachine: (addons-218885) DBG | unable to find defined IP address of network mk-addons-218885 interface with MAC address 52:54:00:4f:2a:e2
	I0924 18:20:36.231777   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:36.231800   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:36.231882   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:36.231906   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:36.231920   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:36.243616   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:20:36.243646   11602 main.go:141] libmachine: (addons-218885) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:20:36.243654   11602 main.go:141] libmachine: (addons-218885) DBG | command : exit 0
	I0924 18:20:36.243658   11602 main.go:141] libmachine: (addons-218885) DBG | err     : exit status 255
	I0924 18:20:36.243667   11602 main.go:141] libmachine: (addons-218885) DBG | output  : 
	I0924 18:20:39.245429   11602 main.go:141] libmachine: (addons-218885) DBG | Getting to WaitForSSH function...
	I0924 18:20:39.247941   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248310   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.248361   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.248472   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH client type: external
	I0924 18:20:39.248497   11602 main.go:141] libmachine: (addons-218885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa (-rw-------)
	I0924 18:20:39.248544   11602 main.go:141] libmachine: (addons-218885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:20:39.248581   11602 main.go:141] libmachine: (addons-218885) DBG | About to run SSH command:
	I0924 18:20:39.248599   11602 main.go:141] libmachine: (addons-218885) DBG | exit 0
	I0924 18:20:39.370720   11602 main.go:141] libmachine: (addons-218885) DBG | SSH cmd err, output: <nil>: 
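Put together, the SSH reachability probe logged above amounts to roughly the following command (options taken from the logged client arguments; ordering and quoting are tidied here for readability):

	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	  -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa \
	  -p 22 docker@192.168.39.215 'exit 0'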
	I0924 18:20:39.371024   11602 main.go:141] libmachine: (addons-218885) KVM machine creation complete!
	I0924 18:20:39.371383   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:39.371926   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372115   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:39.372292   11602 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:20:39.372308   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:20:39.373716   11602 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:20:39.373728   11602 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:20:39.373737   11602 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:20:39.373742   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.375983   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376314   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.376342   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.376467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.376746   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.376896   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.377041   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.377176   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.377355   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.377366   11602 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:20:39.474162   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.474185   11602 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:20:39.474192   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.476622   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477004   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.477030   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.477220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.477426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477578   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.477699   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.477853   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.478018   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.478028   11602 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:20:39.575513   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:20:39.575630   11602 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:20:39.575647   11602 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:20:39.575659   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.575913   11602 buildroot.go:166] provisioning hostname "addons-218885"
	I0924 18:20:39.575936   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.576144   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.578676   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579102   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.579128   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.579285   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.579467   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579584   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.579717   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.579893   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.580094   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.580111   11602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-218885 && echo "addons-218885" | sudo tee /etc/hostname
	I0924 18:20:39.692677   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-218885
	
	I0924 18:20:39.692711   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.695685   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696027   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.696057   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.696220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:39.696411   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696598   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:39.696757   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:39.696917   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:39.697115   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:39.697138   11602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-218885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-218885/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-218885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:20:39.803035   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:20:39.803068   11602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:20:39.803143   11602 buildroot.go:174] setting up certificates
	I0924 18:20:39.803160   11602 provision.go:84] configureAuth start
	I0924 18:20:39.803180   11602 main.go:141] libmachine: (addons-218885) Calling .GetMachineName
	I0924 18:20:39.803472   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:39.806086   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806371   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.806397   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.806540   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:39.808868   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809212   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:39.809237   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:39.809404   11602 provision.go:143] copyHostCerts
	I0924 18:20:39.809469   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:20:39.809588   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:20:39.809648   11602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:20:39.809697   11602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.addons-218885 san=[127.0.0.1 192.168.39.215 addons-218885 localhost minikube]
	I0924 18:20:40.082244   11602 provision.go:177] copyRemoteCerts
	I0924 18:20:40.082308   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:20:40.082332   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.085171   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085563   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.085591   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.085797   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.085983   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.086103   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.086224   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.165135   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:20:40.192252   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:20:40.219501   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:20:40.246264   11602 provision.go:87] duration metric: took 443.085344ms to configureAuth
	I0924 18:20:40.246293   11602 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:20:40.246484   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:20:40.246570   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.249244   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249629   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.249653   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.249818   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.250018   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.250308   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.250488   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.250644   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.250658   11602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:20:40.468815   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:20:40.468854   11602 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:20:40.468866   11602 main.go:141] libmachine: (addons-218885) Calling .GetURL
	I0924 18:20:40.470093   11602 main.go:141] libmachine: (addons-218885) DBG | Using libvirt version 6000000
	I0924 18:20:40.472092   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472382   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.472406   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.472571   11602 main.go:141] libmachine: Docker is up and running!
	I0924 18:20:40.472589   11602 main.go:141] libmachine: Reticulating splines...
	I0924 18:20:40.472597   11602 client.go:171] duration metric: took 28.057014034s to LocalClient.Create
	I0924 18:20:40.472624   11602 start.go:167] duration metric: took 28.057073554s to libmachine.API.Create "addons-218885"
	I0924 18:20:40.472634   11602 start.go:293] postStartSetup for "addons-218885" (driver="kvm2")
	I0924 18:20:40.472648   11602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:20:40.472666   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.472877   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:20:40.472906   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.475196   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475548   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.475575   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.475695   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.475855   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.476016   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.476154   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.552548   11602 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:20:40.556457   11602 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:20:40.556481   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:20:40.556558   11602 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:20:40.556592   11602 start.go:296] duration metric: took 83.950837ms for postStartSetup
	I0924 18:20:40.556636   11602 main.go:141] libmachine: (addons-218885) Calling .GetConfigRaw
	I0924 18:20:40.557160   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.559791   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560070   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.560094   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.560299   11602 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/config.json ...
	I0924 18:20:40.560458   11602 start.go:128] duration metric: took 28.162828516s to createHost
	I0924 18:20:40.560481   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.562477   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.562977   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.563007   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.563174   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.563321   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563475   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.563572   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.563723   11602 main.go:141] libmachine: Using SSH client type: native
	I0924 18:20:40.563885   11602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0924 18:20:40.563895   11602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:20:40.659437   11602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727202040.641796120
	
	I0924 18:20:40.659459   11602 fix.go:216] guest clock: 1727202040.641796120
	I0924 18:20:40.659466   11602 fix.go:229] Guest: 2024-09-24 18:20:40.64179612 +0000 UTC Remote: 2024-09-24 18:20:40.560467466 +0000 UTC m=+28.266972018 (delta=81.328654ms)
	I0924 18:20:40.659526   11602 fix.go:200] guest clock delta is within tolerance: 81.328654ms
	I0924 18:20:40.659536   11602 start.go:83] releasing machines lock for "addons-218885", held for 28.261982282s
	I0924 18:20:40.659570   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.659802   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:40.662293   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662595   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.662623   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.662765   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663205   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663369   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:20:40.663431   11602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:20:40.663474   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.663578   11602 ssh_runner.go:195] Run: cat /version.json
	I0924 18:20:40.663600   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:20:40.666017   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666043   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666366   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666401   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666427   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:40.666442   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:40.666568   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666579   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:20:40.666726   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666735   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:20:40.666891   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.666925   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:20:40.667053   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.667063   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:20:40.762590   11602 ssh_runner.go:195] Run: systemctl --version
	I0924 18:20:40.768558   11602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:20:40.923618   11602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:20:40.929415   11602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:20:40.929483   11602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:20:40.944982   11602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:20:40.945009   11602 start.go:495] detecting cgroup driver to use...
	I0924 18:20:40.945091   11602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:20:40.960695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:20:40.974660   11602 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:20:40.974712   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:20:40.988081   11602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:20:41.001845   11602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:20:41.116471   11602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:20:41.278206   11602 docker.go:233] disabling docker service ...
	I0924 18:20:41.278282   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:20:41.292340   11602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:20:41.304936   11602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:20:41.427259   11602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:20:41.556695   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:20:41.569928   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:41.587343   11602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:20:41.587395   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.597357   11602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:20:41.597420   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.607453   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.617617   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.627570   11602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:20:41.637701   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.647609   11602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:20:41.663924   11602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
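Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment roughly like the one below. This is a reconstruction from the commands shown, not a capture of the actual file on the guest.
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]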
	I0924 18:20:41.674020   11602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:20:41.683135   11602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:20:41.683188   11602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:20:41.696102   11602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
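Note: the sysctl failure above only means br_netfilter was not loaded yet; the subsequent modprobe creates the bridge netfilter knobs, and the echo enables IPv4 forwarding. An illustrative manual check (not part of the test run) would be:
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward    # expected to print 1 after the echo above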
	I0924 18:20:41.705462   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:41.823495   11602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:20:41.913369   11602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:20:41.913456   11602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:20:41.918292   11602 start.go:563] Will wait 60s for crictl version
	I0924 18:20:41.918361   11602 ssh_runner.go:195] Run: which crictl
	I0924 18:20:41.921901   11602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:20:41.958038   11602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:20:41.958153   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:41.985269   11602 ssh_runner.go:195] Run: crio --version
	I0924 18:20:42.014805   11602 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:20:42.016093   11602 main.go:141] libmachine: (addons-218885) Calling .GetIP
	I0924 18:20:42.018614   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019098   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:20:42.019139   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:20:42.019258   11602 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:20:42.022974   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:20:42.034408   11602 kubeadm.go:883] updating cluster {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:20:42.034513   11602 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:42.034569   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:42.064250   11602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:20:42.064317   11602 ssh_runner.go:195] Run: which lz4
	I0924 18:20:42.068235   11602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:20:42.072127   11602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:20:42.072165   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:20:43.181256   11602 crio.go:462] duration metric: took 1.11306138s to copy over tarball
	I0924 18:20:43.181321   11602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:20:45.254978   11602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.073631711s)
	I0924 18:20:45.255003   11602 crio.go:469] duration metric: took 2.07372259s to extract the tarball
	I0924 18:20:45.255011   11602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:20:45.291605   11602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:20:45.334151   11602 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:20:45.334171   11602 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:20:45.334179   11602 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0924 18:20:45.334266   11602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-218885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:20:45.334326   11602 ssh_runner.go:195] Run: crio config
	I0924 18:20:45.379706   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:45.379729   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:45.379738   11602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:20:45.379759   11602 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-218885 NodeName:addons-218885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:20:45.379870   11602 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-218885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:20:45.379931   11602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:45.389532   11602 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:20:45.389607   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:20:45.398734   11602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 18:20:45.414812   11602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:20:45.430737   11602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0924 18:20:45.447185   11602 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0924 18:20:45.451002   11602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
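Note: together with the host.minikube.internal edit at 18:20:42 above, the guest's /etc/hosts now carries entries equivalent to the following (values taken directly from the commands shown):
	192.168.39.1	host.minikube.internal
	192.168.39.215	control-plane.minikube.internal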
	I0924 18:20:45.463061   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:20:45.578185   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:20:45.595455   11602 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885 for IP: 192.168.39.215
	I0924 18:20:45.595478   11602 certs.go:194] generating shared ca certs ...
	I0924 18:20:45.595493   11602 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.595628   11602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:20:45.693821   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt ...
	I0924 18:20:45.693849   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt: {Name:mk739c8ca5d31150a754381b18341274a55f3194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694000   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key ...
	I0924 18:20:45.694011   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key: {Name:mk41697d54972101e4b583bdb12adb625c8a2ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.694084   11602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:20:45.949465   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt ...
	I0924 18:20:45.949495   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt: {Name:mk6c99d30fd3bd72ef67c33fc7a8ad8032d9e547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949649   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key ...
	I0924 18:20:45.949659   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key: {Name:mk4a9ced92c9b128cb0109242c1c85bc6095111a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:45.949724   11602 certs.go:256] generating profile certs ...
	I0924 18:20:45.949773   11602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key
	I0924 18:20:45.949788   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt with IP's: []
	I0924 18:20:46.111748   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt ...
	I0924 18:20:46.111780   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: {Name:mkcda67505a1d19822a9bd6aa070be1298e2b766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.111931   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key ...
	I0924 18:20:46.111941   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.key: {Name:mk7ff22fb920d31c4caef16f50e62ca111cf8f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.112006   11602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9
	I0924 18:20:46.112025   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0924 18:20:46.368887   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 ...
	I0924 18:20:46.368928   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9: {Name:mk3ea14ef69c0bf68f59451ed6ddde96239c0b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369111   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 ...
	I0924 18:20:46.369127   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9: {Name:mk094871a112eec146c05c29dae97b6b80490a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.369227   11602 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt
	I0924 18:20:46.369341   11602 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key.5418caf9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key
	I0924 18:20:46.369416   11602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key
	I0924 18:20:46.369442   11602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt with IP's: []
	I0924 18:20:46.475111   11602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt ...
	I0924 18:20:46.475146   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt: {Name:mk14e8d60731076f4aeed39447637ad04acbd93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475328   11602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key ...
	I0924 18:20:46.475341   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key: {Name:mk1261b7340504044d617837647a0294e6e60c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.475529   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:20:46.475574   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:20:46.475609   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:20:46.475644   11602 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:20:46.476210   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:20:46.510341   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:20:46.534245   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:20:46.573657   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:20:46.597284   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 18:20:46.619923   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:20:46.643112   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:20:46.666301   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 18:20:46.689259   11602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:20:46.712125   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:20:46.728579   11602 ssh_runner.go:195] Run: openssl version
	I0924 18:20:46.734238   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:20:46.744739   11602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749263   11602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.749321   11602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:46.755061   11602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
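Note: b5213941 is the OpenSSL subject hash of minikubeCA.pem (the value produced by the openssl x509 -hash call above); OpenSSL locates trusted CAs via <hash>.0 symlinks in /etc/ssl/certs. An illustrative verification (not part of the test run):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # -> symlink to minikubeCA.pem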
	I0924 18:20:46.765777   11602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:20:46.770113   11602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:20:46.770173   11602 kubeadm.go:392] StartCluster: {Name:addons-218885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-218885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:46.770261   11602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:20:46.770309   11602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:20:46.805114   11602 cri.go:89] found id: ""
	I0924 18:20:46.805195   11602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:20:46.816665   11602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:20:46.826242   11602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:20:46.835662   11602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:20:46.835682   11602 kubeadm.go:157] found existing configuration files:
	
	I0924 18:20:46.835732   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:20:46.844574   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:20:46.844639   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:20:46.853707   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:20:46.862302   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:20:46.862358   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:20:46.871498   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.880100   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:20:46.880165   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:20:46.889113   11602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:20:46.898369   11602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:20:46.898428   11602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:20:46.907411   11602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:20:46.952940   11602 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:20:46.953015   11602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:20:47.040390   11602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:20:47.040491   11602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:20:47.040607   11602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:20:47.049167   11602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:20:47.050888   11602 out.go:235]   - Generating certificates and keys ...
	I0924 18:20:47.050961   11602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:20:47.051052   11602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:20:47.131678   11602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:20:47.547895   11602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:20:47.601285   11602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:20:47.832128   11602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:20:48.031950   11602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:20:48.032124   11602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.210630   11602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:20:48.210816   11602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-218885 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0924 18:20:48.300960   11602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:20:48.605685   11602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:20:48.809001   11602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:20:48.809097   11602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:20:49.163476   11602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:20:49.371134   11602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:20:49.529427   11602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:20:49.721235   11602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:20:49.836924   11602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:20:49.837300   11602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:20:49.839677   11602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:20:49.841378   11602 out.go:235]   - Booting up control plane ...
	I0924 18:20:49.841496   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:20:49.841559   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:20:49.841618   11602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:20:49.858387   11602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:20:49.866657   11602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:20:49.866723   11602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:20:49.987294   11602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:20:49.987476   11602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:20:50.488576   11602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.853577ms
	I0924 18:20:50.488656   11602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:20:55.489419   11602 kubeadm.go:310] [api-check] The API server is healthy after 5.002843483s
	I0924 18:20:55.501919   11602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:20:55.515354   11602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:20:55.545511   11602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:20:55.545740   11602 kubeadm.go:310] [mark-control-plane] Marking the node addons-218885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:20:55.558654   11602 kubeadm.go:310] [bootstrap-token] Using token: wfmddn.jqm9ftj1c9z5a6vs
	I0924 18:20:55.560273   11602 out.go:235]   - Configuring RBAC rules ...
	I0924 18:20:55.560435   11602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:20:55.568873   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:20:55.578532   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:20:55.582388   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:20:55.586382   11602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:20:55.593349   11602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:20:55.897630   11602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:20:56.326166   11602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:20:56.895415   11602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:20:56.896193   11602 kubeadm.go:310] 
	I0924 18:20:56.896289   11602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:20:56.896301   11602 kubeadm.go:310] 
	I0924 18:20:56.896422   11602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:20:56.896443   11602 kubeadm.go:310] 
	I0924 18:20:56.896479   11602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:20:56.896571   11602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:20:56.896662   11602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:20:56.896677   11602 kubeadm.go:310] 
	I0924 18:20:56.896760   11602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:20:56.896768   11602 kubeadm.go:310] 
	I0924 18:20:56.896837   11602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:20:56.896846   11602 kubeadm.go:310] 
	I0924 18:20:56.896915   11602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:20:56.897013   11602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:20:56.897102   11602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:20:56.897113   11602 kubeadm.go:310] 
	I0924 18:20:56.897214   11602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:20:56.897334   11602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:20:56.897344   11602 kubeadm.go:310] 
	I0924 18:20:56.897455   11602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.897590   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:20:56.897626   11602 kubeadm.go:310] 	--control-plane 
	I0924 18:20:56.897639   11602 kubeadm.go:310] 
	I0924 18:20:56.897747   11602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:20:56.897756   11602 kubeadm.go:310] 
	I0924 18:20:56.897876   11602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wfmddn.jqm9ftj1c9z5a6vs \
	I0924 18:20:56.898032   11602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:20:56.898926   11602 kubeadm.go:310] W0924 18:20:46.938376     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899246   11602 kubeadm.go:310] W0924 18:20:46.939040     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:56.899401   11602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:20:56.899428   11602 cni.go:84] Creating CNI manager for ""
	I0924 18:20:56.899438   11602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:56.901322   11602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 18:20:56.902863   11602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 18:20:56.914363   11602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
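Note: the 496-byte /etc/cni/net.d/1-k8s.conflist written here is not echoed in the log. A minimal bridge + host-local conflist of the kind minikube templates for this driver/runtime combination looks roughly like the sketch below; field values are illustrative, and only the 10.244.0.0/16 pod CIDR is taken from the log above.
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}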
	I0924 18:20:56.930973   11602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:20:56.931114   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:56.931143   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-218885 minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-218885 minikube.k8s.io/primary=true
	I0924 18:20:57.076312   11602 ops.go:34] apiserver oom_adj: -16
	I0924 18:20:57.076379   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:57.576425   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.077347   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:58.577119   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.076927   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:59.577230   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.077137   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.577008   11602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:21:00.658298   11602 kubeadm.go:1113] duration metric: took 3.727240888s to wait for elevateKubeSystemPrivileges
	I0924 18:21:00.658328   11602 kubeadm.go:394] duration metric: took 13.888161582s to StartCluster
	I0924 18:21:00.658352   11602 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.658482   11602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:21:00.658929   11602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:21:00.659138   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:21:00.659158   11602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:21:00.659219   11602 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0924 18:21:00.659336   11602 addons.go:69] Setting yakd=true in profile "addons-218885"
	I0924 18:21:00.659349   11602 addons.go:69] Setting inspektor-gadget=true in profile "addons-218885"
	I0924 18:21:00.659352   11602 addons.go:69] Setting default-storageclass=true in profile "addons-218885"
	I0924 18:21:00.659366   11602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-218885"
	I0924 18:21:00.659371   11602 addons.go:69] Setting volcano=true in profile "addons-218885"
	I0924 18:21:00.659357   11602 addons.go:234] Setting addon yakd=true in "addons-218885"
	I0924 18:21:00.659381   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-218885"
	I0924 18:21:00.659390   11602 addons.go:69] Setting volumesnapshots=true in profile "addons-218885"
	I0924 18:21:00.659393   11602 addons.go:69] Setting ingress=true in profile "addons-218885"
	I0924 18:21:00.659399   11602 addons.go:234] Setting addon volumesnapshots=true in "addons-218885"
	I0924 18:21:00.659414   11602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-218885"
	I0924 18:21:00.659374   11602 addons.go:234] Setting addon inspektor-gadget=true in "addons-218885"
	I0924 18:21:00.659424   11602 addons.go:234] Setting addon ingress=true in "addons-218885"
	I0924 18:21:00.659424   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659447   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659367   11602 addons.go:69] Setting storage-provisioner=true in profile "addons-218885"
	I0924 18:21:00.659550   11602 addons.go:234] Setting addon storage-provisioner=true in "addons-218885"
	I0924 18:21:00.659573   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659418   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659383   11602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-218885"
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659864   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659875   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659887   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659936   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659842   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.659993   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659385   11602 addons.go:234] Setting addon volcano=true in "addons-218885"
	I0924 18:21:00.659395   11602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-218885"
	I0924 18:21:00.660031   11602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-218885"
	I0924 18:21:00.659449   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660131   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660177   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660206   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660213   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660215   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660246   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659454   11602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:00.659458   11602 addons.go:69] Setting gcp-auth=true in profile "addons-218885"
	I0924 18:21:00.660330   11602 mustload.go:65] Loading cluster: addons-218885
	I0924 18:21:00.660373   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660401   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.660467   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660487   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659438   11602 addons.go:69] Setting registry=true in profile "addons-218885"
	I0924 18:21:00.660541   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660588   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660542   11602 addons.go:234] Setting addon registry=true in "addons-218885"
	I0924 18:21:00.660620   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659357   11602 config.go:182] Loaded profile config "addons-218885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:21:00.660726   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659470   11602 addons.go:69] Setting ingress-dns=true in profile "addons-218885"
	I0924 18:21:00.660749   11602 addons.go:234] Setting addon ingress-dns=true in "addons-218885"
	I0924 18:21:00.660774   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660816   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.660882   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.660899   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661056   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661141   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661204   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661240   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661266   11602 out.go:177] * Verifying Kubernetes components...
	I0924 18:21:00.661080   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661384   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.659461   11602 addons.go:69] Setting cloud-spanner=true in profile "addons-218885"
	I0924 18:21:00.661444   11602 addons.go:234] Setting addon cloud-spanner=true in "addons-218885"
	I0924 18:21:00.661469   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.659384   11602 addons.go:69] Setting metrics-server=true in profile "addons-218885"
	I0924 18:21:00.661619   11602 addons.go:234] Setting addon metrics-server=true in "addons-218885"
	I0924 18:21:00.661644   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.661822   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.661841   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.661979   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.662002   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.672130   11602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0924 18:21:00.680735   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0924 18:21:00.681044   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0924 18:21:00.681236   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681465   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0924 18:21:00.681785   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681838   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.681788   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.682083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682102   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682225   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682240   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682295   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0924 18:21:00.682410   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682419   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682537   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.682552   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.682600   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682643   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682683   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.682749   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.691487   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691518   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.691625   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0924 18:21:00.691743   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.691812   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0924 18:21:00.691839   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.691926   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.691968   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.692170   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692210   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692229   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.692243   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.692638   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.692695   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.692721   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693073   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693157   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.693172   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.693195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.693371   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.693596   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.693635   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.693926   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.694150   11602 addons.go:234] Setting addon default-storageclass=true in "addons-218885"
	I0924 18:21:00.694198   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.694456   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694483   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.694546   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.694577   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.695678   11602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-218885"
	I0924 18:21:00.695724   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.696084   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.696123   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.699951   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:00.700319   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.700355   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.713968   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0924 18:21:00.714463   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.715097   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.715118   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.715521   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.715582   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0924 18:21:00.716260   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.716297   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.716505   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.724819   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I0924 18:21:00.725028   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0924 18:21:00.725630   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726076   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.726173   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.726195   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.726596   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.727232   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.727266   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.727423   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0924 18:21:00.728015   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.728034   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.728196   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0924 18:21:00.728690   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.728703   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.728762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.729325   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.729349   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.729621   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729633   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.729639   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.729653   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.730009   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730051   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.730921   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.731302   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.731334   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.732823   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.733011   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.733030   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.734792   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.734795   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:00.734814   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:00.734823   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:00.734840   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:00.735052   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:00.735064   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	W0924 18:21:00.735144   11602 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0924 18:21:00.748351   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0924 18:21:00.750720   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0924 18:21:00.750728   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0924 18:21:00.751162   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751247   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751319   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.751440   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751456   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.751567   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.751585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756724   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0924 18:21:00.756730   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.756778   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0924 18:21:00.756725   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0924 18:21:00.756847   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.756861   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.756930   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757362   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.757369   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757379   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.757891   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757905   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.757921   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.757933   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.757908   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.758358   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758456   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.758697   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.759435   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0924 18:21:00.759442   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.759636   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.759920   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.760023   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.760374   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.760387   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.760408   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.760480   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.760610   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.761142   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.761179   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.761488   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.761503   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.761847   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 18:21:00.761975   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.762202   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.763064   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:21:00.763905   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764201   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.764244   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.764687   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.764830   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.764843   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.765214   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.765520   11602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:21:00.765754   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.765882   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.766152   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:21:00.766854   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:00.766965   11602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:00.767251   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:21:00.767271   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.767695   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:21:00.767715   11602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:21:00.767749   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.768686   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:00.768698   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 18:21:00.768713   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.770065   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0924 18:21:00.770538   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.771457   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.771477   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.771872   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.772426   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.772458   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.773088   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0924 18:21:00.774506   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.774988   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775391   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.775411   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775446   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I0924 18:21:00.775557   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.775742   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0924 18:21:00.775762   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.776043   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776070   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776313   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:21:00.776431   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.776447   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.776497   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.776748   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.776767   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.776798   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.776829   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.777190   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777241   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.777281   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.777317   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.777820   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.777981   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778090   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.778249   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778261   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.778415   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778483   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.778799   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0924 18:21:00.779313   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.779385   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:21:00.779922   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.779987   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0924 18:21:00.780000   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.780014   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.780328   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.781720   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:21:00.781840   11602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:21:00.783389   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:21:00.783406   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:21:00.783408   11602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:21:00.783426   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.785670   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:21:00.786392   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.786875   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.786904   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.787147   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.787290   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.787460   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.787571   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.787929   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:21:00.789553   11602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:21:00.789818   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0924 18:21:00.790777   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:21:00.790798   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:21:00.790817   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.791841   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0924 18:21:00.793491   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.793863   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.793884   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.794037   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.794196   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.794343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.794479   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.795306   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795325   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795413   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795716   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.795878   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.795893   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.795928   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.795965   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.796083   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796101   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796213   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796228   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796239   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796382   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.796422   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.796444   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.796634   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.796692   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.797108   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.797124   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.797174   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797214   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.797254   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797672   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:00.797708   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:00.797893   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.797947   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.798167   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.799160   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0924 18:21:00.799285   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799329   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.799809   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800183   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.800664   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.800835   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.800844   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.801181   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.801262   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.801710   11602 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:21:00.801722   11602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:21:00.801827   11602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:21:00.802746   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.802972   11602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:21:00.803140   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:00.803158   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:21:00.803175   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.803332   11602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:00.803346   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:21:00.803360   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.804116   11602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 18:21:00.804171   11602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:21:00.804317   11602 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:21:00.804328   11602 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:21:00.804343   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.806039   11602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:21:00.806052   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:21:00.806068   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.807823   11602 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:21:00.807997   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808507   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.808913   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.808939   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809198   11602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:00.809214   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:21:00.809230   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.809866   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.809901   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809952   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.809996   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810009   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810036   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.810052   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810069   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810710   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.810758   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.810762   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.810798   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810928   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.810938   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.810973   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811072   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.811124   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811175   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.811575   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.811599   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.811747   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.811961   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.812105   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.812231   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.813492   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813801   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.813819   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.813949   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.814102   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.814242   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.814374   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.819089   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0924 18:21:00.819472   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.819662   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0924 18:21:00.819981   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.819993   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820026   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.820352   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.820499   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.820570   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.820585   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.820921   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.821036   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.822394   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822536   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.822579   11602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:00.822590   11602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:21:00.822614   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.824394   11602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:21:00.825222   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825626   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.825642   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.825660   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:21:00.825679   11602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:21:00.825698   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.825895   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.826045   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.826169   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.826315   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.828341   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828768   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.828797   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.828911   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.829107   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.829220   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.829309   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.833381   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0924 18:21:00.833708   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:00.834195   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:00.834214   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:00.834741   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:00.834967   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:00.836909   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:00.838863   11602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 18:21:00.840172   11602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:00.840190   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 18:21:00.840204   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:00.843461   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.843939   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:00.843964   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:00.844120   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:00.844264   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:00.844395   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:00.844488   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:00.925784   11602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:21:00.967714   11602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:21:01.124083   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:21:01.139520   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 18:21:01.209659   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:21:01.209681   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:21:01.211490   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:21:01.211509   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:21:01.230706   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:21:01.259266   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:21:01.265419   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:21:01.265444   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:21:01.267525   11602 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:21:01.267542   11602 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:21:01.270870   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:21:01.270886   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:21:01.294065   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:21:01.302436   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:21:01.302464   11602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:21:01.303436   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:21:01.303457   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:21:01.336902   11602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:21:01.336926   11602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:21:01.390129   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:21:01.405905   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 18:21:01.443401   11602 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:21:01.443421   11602 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:21:01.460206   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:21:01.460233   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:21:01.489629   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:21:01.489659   11602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:21:01.516924   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:21:01.516952   11602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:21:01.527602   11602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:21:01.527630   11602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:21:01.530327   11602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.530344   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:21:01.544683   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:21:01.544711   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:21:01.689986   11602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.690011   11602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:21:01.705932   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:21:01.705958   11602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:21:01.740697   11602 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:21:01.740721   11602 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:21:01.775169   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:21:01.804259   11602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:21:01.804283   11602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:21:01.819198   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:21:01.819230   11602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:21:01.827355   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:21:01.855195   11602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:01.855219   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:21:01.951137   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:21:01.951166   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:21:01.969440   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:21:01.969463   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:21:02.069888   11602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.069915   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:21:02.099859   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:02.231068   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:21:02.231095   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:21:02.305967   11602 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:21:02.305990   11602 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:21:02.390434   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:21:02.434755   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:21:02.434778   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:21:02.586683   11602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:21:02.586715   11602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:21:02.733250   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:21:02.733348   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:21:02.792924   11602 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:02.792950   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:21:03.055872   11602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.055895   11602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:21:03.132217   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:21:03.134229   11602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.208412456s)
	I0924 18:21:03.134255   11602 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0924 18:21:03.134280   11602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.166538652s)
	I0924 18:21:03.134987   11602 node_ready.go:35] waiting up to 6m0s for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139952   11602 node_ready.go:49] node "addons-218885" has status "Ready":"True"
	I0924 18:21:03.139976   11602 node_ready.go:38] duration metric: took 4.969165ms for node "addons-218885" to be "Ready" ...
	I0924 18:21:03.139986   11602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:21:03.150885   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:03.433867   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:21:03.668937   11602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-218885" context rescaled to 1 replicas
	I0924 18:21:03.814522   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.690406906s)
	I0924 18:21:03.814578   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814590   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.814905   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.814918   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:03.814925   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:03.814936   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:03.814944   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:03.815212   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:03.815229   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:05.193674   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.675146   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:07.776279   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:21:07.776319   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:07.779561   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780040   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:07.780063   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:07.780297   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:07.780488   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:07.780661   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:07.780787   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:07.972822   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.833257544s)
	I0924 18:21:07.972874   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972887   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972834   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.742092294s)
	I0924 18:21:07.972905   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.713615593s)
	I0924 18:21:07.972935   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.972950   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.972937   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973033   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973066   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.567135616s)
	I0924 18:21:07.973034   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.582880208s)
	I0924 18:21:07.972999   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.678903665s)
	I0924 18:21:07.973102   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973112   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973145   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973154   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973200   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973225   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973227   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973230   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.198037335s)
	I0924 18:21:07.973239   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973240   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973249   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973251   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973257   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973257   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973262   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973267   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973263   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.14588148s)
	I0924 18:21:07.973276   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973283   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973374   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.873477591s)
	W0924 18:21:07.973425   11602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:07.973466   11602 retry.go:31] will retry after 341.273334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:21:07.973483   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973512   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973519   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973526   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973532   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973533   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973543   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973551   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973557   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973595   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.583073615s)
	I0924 18:21:07.973620   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973630   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973771   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973814   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973815   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.841563977s)
	I0924 18:21:07.973828   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.973844   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973850   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.973858   11602 addons.go:475] Verifying addon metrics-server=true in "addons-218885"
	I0924 18:21:07.973878   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.973891   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.973971   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.973979   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974078   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974087   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974094   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974100   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974255   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974275   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974281   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974287   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974292   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974331   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974353   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974359   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.974366   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.974373   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.974966   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.974991   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.974998   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975194   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975217   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975223   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975231   11602 addons.go:475] Verifying addon registry=true in "addons-218885"
	I0924 18:21:07.975723   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975745   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.975768   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975774   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975931   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.975939   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.975946   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.975952   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976518   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.976541   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976548   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976693   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976707   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976717   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976725   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976754   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976765   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976773   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:07.976780   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:07.976888   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.976902   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976910   11602 addons.go:475] Verifying addon ingress=true in "addons-218885"
	I0924 18:21:07.976949   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977417   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:07.977442   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977448   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.976973   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.977553   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.978625   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:07.978641   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:07.979225   11602 out.go:177] * Verifying registry addon...
	I0924 18:21:07.979369   11602 out.go:177] * Verifying ingress addon...
	I0924 18:21:07.980099   11602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-218885 service yakd-dashboard -n yakd-dashboard
	
	I0924 18:21:07.981598   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:21:07.981987   11602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 18:21:07.999231   11602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:21:07.999256   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:07.999600   11602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 18:21:07.999619   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:08.005488   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.005509   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.005801   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:08.005847   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.005864   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.017897   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:08.017922   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:08.018287   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:08.018306   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:08.058607   11602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:21:08.094929   11602 addons.go:234] Setting addon gcp-auth=true in "addons-218885"
	I0924 18:21:08.094992   11602 host.go:66] Checking if "addons-218885" exists ...
	I0924 18:21:08.095419   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.095475   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.110585   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0924 18:21:08.111040   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.111584   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.111611   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.111964   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.112535   11602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:21:08.112578   11602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:21:08.127155   11602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0924 18:21:08.127631   11602 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:21:08.128121   11602 main.go:141] libmachine: Using API Version  1
	I0924 18:21:08.128146   11602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:21:08.128433   11602 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:21:08.128606   11602 main.go:141] libmachine: (addons-218885) Calling .GetState
	I0924 18:21:08.130080   11602 main.go:141] libmachine: (addons-218885) Calling .DriverName
	I0924 18:21:08.130278   11602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:21:08.130305   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHHostname
	I0924 18:21:08.133126   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133582   11602 main.go:141] libmachine: (addons-218885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:2a:e2", ip: ""} in network mk-addons-218885: {Iface:virbr1 ExpiryTime:2024-09-24 19:20:26 +0000 UTC Type:0 Mac:52:54:00:4f:2a:e2 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-218885 Clientid:01:52:54:00:4f:2a:e2}
	I0924 18:21:08.133611   11602 main.go:141] libmachine: (addons-218885) DBG | domain addons-218885 has defined IP address 192.168.39.215 and MAC address 52:54:00:4f:2a:e2 in network mk-addons-218885
	I0924 18:21:08.133777   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHPort
	I0924 18:21:08.133930   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHKeyPath
	I0924 18:21:08.134104   11602 main.go:141] libmachine: (addons-218885) Calling .GetSSHUsername
	I0924 18:21:08.134250   11602 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/addons-218885/id_rsa Username:docker}
	I0924 18:21:08.315216   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:21:08.488445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:08.488845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.002788   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.003393   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.077458   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.643536692s)
	I0924 18:21:09.077506   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077519   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.077783   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.077837   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.077851   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:09.077853   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.077867   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:09.078166   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:09.078214   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:09.078225   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:09.078240   11602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-218885"
	I0924 18:21:09.079280   11602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:21:09.080127   11602 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:21:09.081849   11602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:21:09.082510   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:21:09.083069   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:21:09.083086   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:21:09.113707   11602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:21:09.113739   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.175252   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:21:09.175277   11602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:21:09.215574   11602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.215599   11602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:21:09.270926   11602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:21:09.486696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.486738   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:09.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.986544   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:09.987121   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.087460   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.156758   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:10.264232   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.948944982s)
	I0924 18:21:10.264285   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264299   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264666   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264719   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.264726   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.264738   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.264746   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.264961   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.264973   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.556445   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.559448   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:10.822097   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.873812   11602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.602842869s)
	I0924 18:21:10.873863   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.873886   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874154   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874174   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.874183   11602 main.go:141] libmachine: Making call to close driver server
	I0924 18:21:10.874191   11602 main.go:141] libmachine: (addons-218885) Calling .Close
	I0924 18:21:10.874219   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874421   11602 main.go:141] libmachine: (addons-218885) DBG | Closing plugin on server side
	I0924 18:21:10.874465   11602 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:21:10.874474   11602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:21:10.876389   11602 addons.go:475] Verifying addon gcp-auth=true in "addons-218885"
	I0924 18:21:10.878112   11602 out.go:177] * Verifying gcp-auth addon...
	I0924 18:21:10.879991   11602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:21:10.914619   11602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:21:10.914644   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:10.986616   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:10.987116   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.087458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.383545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.486763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.486957   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:11.640030   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.884322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:11.985458   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:11.986775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.088092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.156950   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:12.383370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.485195   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.487941   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:12.587459   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.883672   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:12.986303   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:12.986526   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.087330   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.385285   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.485959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.486129   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:13.586793   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.884002   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:13.985294   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:13.987442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.087331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.384138   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.485676   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.486525   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:14.587163   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.673311   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:14.883885   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:14.985667   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:14.987837   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.087254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.538287   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.538499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.538661   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:15.587780   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.883673   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:15.986434   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:15.986755   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.087600   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.161186   11602 pod_ready.go:98] pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161211   11602 pod_ready.go:82] duration metric: took 13.010302575s for pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace to be "Ready" ...
	E0924 18:21:16.161224   11602 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-5cl8g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-24 18:21:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[{IP:192.168.39.215}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-24 18:21:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-24 18:21:05 +0000 UTC,FinishedAt:2024-09-24 18:21:15 +0000 UTC,ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://1a7be4e5a265f4ca7f7b1e9046b67fbd27b0a2df0b4180e732ae601ee76e0003 Started:0xc0016c5620 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001432d90} {Name:kube-api-access-dx4gt MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001432da0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0924 18:21:16.161239   11602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:16.383548   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.486230   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.487442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:16.586690   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:16.986006   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:16.986774   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.087310   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.486612   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:17.487453   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.586919   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.883638   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:17.987330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:17.987849   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.089144   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.167517   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:18.383520   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.486806   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.486918   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:18.588925   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.883462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:18.986014   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:18.986560   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.086554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.383070   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.484874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:19.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.587560   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.883992   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:19.986152   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:19.987408   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.086874   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.383440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.486268   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.486550   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:20.791631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.793936   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:20.883763   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:20.986920   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:20.987056   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.088233   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.383254   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.486556   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.486845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:21.587198   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.884631   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:21.986396   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:21.986589   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.087981   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.383307   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.486130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.487114   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:22.587895   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.883205   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:22.986726   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:22.987810   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.087527   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.167137   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:23.382922   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.486893   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.487141   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:23.586653   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.887051   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:23.992735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:23.993112   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.088192   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.384102   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.485524   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.486088   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:24.588291   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:24.883718   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:24.986064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:24.986669   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.086972   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.167765   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:25.385694   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.487039   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:25.487327   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.587485   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:25.883440   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:25.987089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:25.987473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.087677   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.383334   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.486844   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:26.487823   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.586734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:26.883494   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:26.986274   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:26.986679   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.087587   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.383764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.486172   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:27.486167   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.586436   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:27.667175   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:27.883579   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:27.986382   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:27.986773   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.086697   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.383293   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.493330   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:28.505220   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:28.883915   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:28.985128   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:28.986961   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.086970   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.382946   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.485425   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.487089   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:29.587540   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:29.670087   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:29.884302   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:29.985838   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:29.986275   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.086421   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.385253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.485483   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.486689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:30.588361   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:30.883735   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:30.986783   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:30.987125   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.088911   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.385049   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.486543   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.486992   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:31.587160   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:31.883656   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:31.985711   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:31.986231   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.086502   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.167554   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:32.384448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.486308   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:32.486463   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.587554   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:32.883253   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:32.987205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:32.987734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.087771   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.384995   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.486934   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:33.487318   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.586663   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:33.884321   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:33.986319   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:33.987702   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.087618   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.168690   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:34.387765   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.485791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.486938   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:34.587761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:34.884048   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:34.985832   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:34.986032   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.087501   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.386323   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.486147   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.486397   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:35.586931   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:35.884466   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:35.987056   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:35.987253   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.086959   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.383855   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.486473   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.486749   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:36.586935   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:36.667520   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:36.884713   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:36.985614   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:36.987395   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.094813   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:37.383846   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:37.486004   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:37.486280   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:37.588888   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.231455   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.234409   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.239132   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.239417   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.383733   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.486322   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.486594   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:38.587058   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:38.667664   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:38.883555   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.986183   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:38.986218   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.086393   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.383891   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.485904   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.486274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:39.586892   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:39.883990   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.985035   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:39.986333   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.086738   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.383797   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.486109   11602 kapi.go:107] duration metric: took 32.504507933s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:21:40.486350   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:40.586745   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:40.882856   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.986205   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.086472   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.167497   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:41.384061   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.486569   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:41.587079   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:41.883691   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.987379   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.086661   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.592448   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.593329   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:42.593353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:42.884026   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.986740   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.087210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.384130   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.486932   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:43.587734   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:43.671555   11602 pod_ready.go:103] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"False"
	I0924 18:21:43.884139   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.986534   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.087447   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.383601   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.486943   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:44.587092   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:44.883703   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.986744   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.086822   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.384617   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.486345   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:45.586804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:45.674703   11602 pod_ready.go:93] pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.674728   11602 pod_ready.go:82] duration metric: took 29.513479171s for pod "coredns-7c65d6cfc9-wbgv9" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.674737   11602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682099   11602 pod_ready.go:93] pod "etcd-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.682125   11602 pod_ready.go:82] duration metric: took 7.380934ms for pod "etcd-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.682136   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727932   11602 pod_ready.go:93] pod "kube-apiserver-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.727960   11602 pod_ready.go:82] duration metric: took 45.815667ms for pod "kube-apiserver-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.727973   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736186   11602 pod_ready.go:93] pod "kube-controller-manager-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.736205   11602 pod_ready.go:82] duration metric: took 8.225404ms for pod "kube-controller-manager-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.736216   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741087   11602 pod_ready.go:93] pod "kube-proxy-jsjnj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:45.741103   11602 pod_ready.go:82] duration metric: took 4.881511ms for pod "kube-proxy-jsjnj" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.741111   11602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:45.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.988310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.066604   11602 pod_ready.go:93] pod "kube-scheduler-addons-218885" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.066631   11602 pod_ready.go:82] duration metric: took 325.512397ms for pod "kube-scheduler-addons-218885" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.066644   11602 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.087500   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.384729   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.465983   11602 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace has status "Ready":"True"
	I0924 18:21:46.466004   11602 pod_ready.go:82] duration metric: took 399.352493ms for pod "nvidia-device-plugin-daemonset-qhkcp" in "kube-system" namespace to be "Ready" ...
	I0924 18:21:46.466012   11602 pod_ready.go:39] duration metric: took 43.326012607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
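
Note: the readiness wait logged above comes down to polling each pod's status for a Ready condition with status True, which is what pod_ready.go checks on every iteration. A minimal client-go sketch of the same check follows; the kubeconfig path and the hard-coded "kube-system" namespace are illustrative assumptions, not values taken from the test harness.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the local kubeconfig (path is an assumption; minikube writes one per profile).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // List kube-system pods and report whether each one has the Ready condition set to True,
        // the same condition the test's readiness loop waits on.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%-50s phase=%-10s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }
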
	I0924 18:21:46.466029   11602 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:21:46.466084   11602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:21:46.483386   11602 api_server.go:72] duration metric: took 45.824195071s to wait for apiserver process to appear ...
	I0924 18:21:46.483405   11602 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:21:46.483425   11602 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0924 18:21:46.486475   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:46.489100   11602 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0924 18:21:46.490451   11602 api_server.go:141] control plane version: v1.31.1
	I0924 18:21:46.490474   11602 api_server.go:131] duration metric: took 7.061904ms to wait for apiserver health ...
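
Note: the healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint; the apiserver is treated as healthy once it returns 200 with the body "ok". A rough standalone sketch in Go is below; the endpoint is the one recorded in the log, and skipping TLS verification plus relying on anonymous access to /healthz are simplifications for illustration only.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Endpoint taken from the log above; adjust host and port for another cluster.
        const url = "https://192.168.39.215:8443/healthz"

        // Skip certificate verification because minikube's apiserver uses a self-signed CA;
        // a real check would load the cluster CA from the kubeconfig instead.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
    }
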
	I0924 18:21:46.490484   11602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:21:46.588064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:46.672865   11602 system_pods.go:59] 17 kube-system pods found
	I0924 18:21:46.672904   11602 system_pods.go:61] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:46.672916   11602 system_pods.go:61] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:46.672926   11602 system_pods.go:61] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:46.672936   11602 system_pods.go:61] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:46.672942   11602 system_pods.go:61] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:46.672948   11602 system_pods.go:61] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:46.672954   11602 system_pods.go:61] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:46.672962   11602 system_pods.go:61] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:46.672971   11602 system_pods.go:61] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:46.672979   11602 system_pods.go:61] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:46.672987   11602 system_pods.go:61] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:46.672995   11602 system_pods.go:61] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:46.673003   11602 system_pods.go:61] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:46.673007   11602 system_pods.go:61] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:46.673014   11602 system_pods.go:61] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673022   11602 system_pods.go:61] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:46.673027   11602 system_pods.go:61] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:46.673035   11602 system_pods.go:74] duration metric: took 182.544371ms to wait for pod list to return data ...
	I0924 18:21:46.673044   11602 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:21:46.864990   11602 default_sa.go:45] found service account: "default"
	I0924 18:21:46.865016   11602 default_sa.go:55] duration metric: took 191.965785ms for default service account to be created ...
	I0924 18:21:46.865028   11602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:21:46.884297   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.986602   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.070157   11602 system_pods.go:86] 17 kube-system pods found
	I0924 18:21:47.070185   11602 system_pods.go:89] "coredns-7c65d6cfc9-wbgv9" [b793eb56-d95a-49f1-8294-1ab4837d5d36] Running
	I0924 18:21:47.070195   11602 system_pods.go:89] "csi-hostpath-attacher-0" [f054c47d-be0e-47ac-bb9a-665fff0e4ccc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:21:47.070203   11602 system_pods.go:89] "csi-hostpath-resizer-0" [aea59387-31a8-4570-aa63-aaa5b6a54eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:21:47.070211   11602 system_pods.go:89] "csi-hostpathplugin-rjjfm" [1af2c700-d42a-499b-89c3-badfa6dae8c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:21:47.070215   11602 system_pods.go:89] "etcd-addons-218885" [a288635e-a61b-4c7d-b1dc-90910c161b87] Running
	I0924 18:21:47.070219   11602 system_pods.go:89] "kube-apiserver-addons-218885" [af891cb5-c6e3-43c5-a480-76844da48620] Running
	I0924 18:21:47.070223   11602 system_pods.go:89] "kube-controller-manager-addons-218885" [2df23cca-721a-4fe5-8c91-8c3207ce708e] Running
	I0924 18:21:47.070226   11602 system_pods.go:89] "kube-ingress-dns-minikube" [209a83c9-7b47-44e1-8897-682ab287a114] Running
	I0924 18:21:47.070229   11602 system_pods.go:89] "kube-proxy-jsjnj" [07996bfd-1ae9-4e9c-9148-14966458de66] Running
	I0924 18:21:47.070232   11602 system_pods.go:89] "kube-scheduler-addons-218885" [43c814bf-b252-4f9f-a5e1-50a0e68c2ff3] Running
	I0924 18:21:47.070237   11602 system_pods.go:89] "metrics-server-84c5f94fbc-pkzn4" [65ed5b0c-3307-4c48-b8dc-666848d353fc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:21:47.070240   11602 system_pods.go:89] "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
	I0924 18:21:47.070243   11602 system_pods.go:89] "registry-66c9cd494c-b94p9" [bb39eff0-510f-4e28-b3b7-a246e7ca880c] Running
	I0924 18:21:47.070246   11602 system_pods.go:89] "registry-proxy-wpjp5" [e715cd68-83d0-4850-abc2-b9a3f139e6f8] Running
	I0924 18:21:47.070253   11602 system_pods.go:89] "snapshot-controller-56fcc65765-775tk" [a08ba94c-acb8-4274-8018-b576a56c94f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070257   11602 system_pods.go:89] "snapshot-controller-56fcc65765-q2xbm" [8316fcb5-fc58-46a5-821d-790e06ea09ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:21:47.070261   11602 system_pods.go:89] "storage-provisioner" [43a66ff5-32a5-4cdb-9073-da217f1138f1] Running
	I0924 18:21:47.070266   11602 system_pods.go:126] duration metric: took 205.232474ms to wait for k8s-apps to be running ...
	I0924 18:21:47.070273   11602 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:21:47.070316   11602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:21:47.087696   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.088486   11602 system_svc.go:56] duration metric: took 18.204875ms WaitForService to wait for kubelet
	I0924 18:21:47.088509   11602 kubeadm.go:582] duration metric: took 46.429320046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:21:47.088529   11602 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:21:47.266397   11602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:21:47.266422   11602 node_conditions.go:123] node cpu capacity is 2
	I0924 18:21:47.266433   11602 node_conditions.go:105] duration metric: took 177.899279ms to run NodePressure ...
	I0924 18:21:47.266444   11602 start.go:241] waiting for startup goroutines ...
	I0924 18:21:47.383807   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.486627   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:47.592685   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:47.882809   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.988953   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.088085   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.384495   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.486920   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:48.587547   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:48.884003   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.986521   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.089118   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.384064   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.487365   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:49.586764   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:49.883741   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.986565   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.086791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.383210   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.486863   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:50.586794   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:50.883384   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.986147   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.087529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.383646   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.487904   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:51.587015   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:51.883461   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.986235   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.087462   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:52.383965   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:52.485684   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:52.586927   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.043269   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.044081   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.086805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.384041   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.489996   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:53.588300   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:53.884430   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.986023   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.088358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.384017   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.486355   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:54.587249   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:54.883465   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.986368   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.088397   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.387044   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.486136   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:55.587101   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:55.883331   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.986435   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.086566   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.383493   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.486431   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:56.587234   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:56.884841   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.986911   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.088106   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.384206   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.487256   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:57.587982   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:57.884019   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.994140   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.095443   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.383978   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.486983   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:58.587545   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:58.883975   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.986500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.087389   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.388016   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.487717   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:21:59.591066   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:59.884701   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.986927   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.089353   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.385499   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.491326   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:00.586790   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:00.884136   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.986787   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.089833   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.388730   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.502425   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:01.597581   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:01.884562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.989808   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.089518   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:02.384237   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:02.486541   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:02.587146   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.079446   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.080120   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.087562   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.383714   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.486549   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:03.587281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:03.884126   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.987082   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.094340   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.384081   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.486442   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:04.586869   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:04.883281   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.985346   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.086875   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.385212   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.487246   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:05.587182   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:05.886629   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.987975   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.087851   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.383918   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.487588   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:06.587475   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:06.883377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.986090   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.087419   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.384451   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.487315   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:07.588370   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:22:07.884884   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.988441   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.088256   11602 kapi.go:107] duration metric: took 59.005743641s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 18:22:08.384288   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.486671   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:08.883496   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.986150   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.384140   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.486763   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:09.883529   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.985845   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.383692   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.485952   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:10.883625   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.986197   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.383715   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.486007   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:11.883706   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.986310   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.383898   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.485858   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:12.883805   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:12.986764   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.385789   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.488283   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:13.884377   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:13.987274   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.386814   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.487614   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:14.884301   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:14.986008   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.385093   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.486500   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:15.884358   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:15.985775   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.383761   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.486006   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:16.883791   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:16.986849   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.592172   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.592689   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:17.883336   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:17.986491   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.383313   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.485567   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:18.883401   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:18.988696   11602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 18:22:19.384325   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:19.485836   11602 kapi.go:107] duration metric: took 1m11.503845867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 18:22:19.883804   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.442509   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:20.884372   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.384165   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:21.883778   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.383574   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:22.883482   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.384312   11602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:23.884604   11602 kapi.go:107] duration metric: took 1m13.004608549s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:22:23.886195   11602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-218885 cluster.
	I0924 18:22:23.887597   11602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:22:23.888920   11602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:22:23.890409   11602 out.go:177] * Enabled addons: cloud-spanner, metrics-server, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 18:22:23.891803   11602 addons.go:510] duration metric: took 1m23.232581307s for enable addons: enabled=[cloud-spanner metrics-server ingress-dns storage-provisioner inspektor-gadget nvidia-device-plugin yakd default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0924 18:22:23.891846   11602 start.go:246] waiting for cluster config update ...
	I0924 18:22:23.891861   11602 start.go:255] writing updated cluster config ...
	I0924 18:22:23.892111   11602 ssh_runner.go:195] Run: rm -f paused
	I0924 18:22:23.942645   11602 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:22:23.944149   11602 out.go:177] * Done! kubectl is now configured to use "addons-218885" cluster and "default" namespace by default
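	The gcp-auth note in the output above can be acted on directly: giving a pod a label with the gcp-auth-skip-secret key keeps the addon's webhook from mounting GCP credentials into it. Below is a minimal client-go sketch of that; the pod name, namespace, label value, and image tag are illustrative assumptions, and an equivalent kubectl/YAML manifest works just as well.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig (~/.kube/config); minikube has
		// already written the addons-218885 context there per the "Done!" line above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical pod name
				Namespace: "default",
				// Presence of this label key tells the gcp-auth webhook to skip the pod;
				// the "true" value here is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/kicbase/echo-server:1.0", // example image; any image works
				}},
			},
		}

		created, err := client.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod without GCP credential mount:", created.Name)
	}

	Pods created this way still run normally; they simply do not receive the mounted credentials or the GOOGLE_APPLICATION_CREDENTIALS wiring that the addon injects into unlabeled pods.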
	
	
	==> CRI-O <==
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.700078319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202993700048871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3acd9ded-3fcd-4a73-939b-be294baf4f19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.704655792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44d3d83a-86c5-456f-a2ff-6312c3ca1afa name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.704842038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44d3d83a-86c5-456f-a2ff-6312c3ca1afa name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.705275894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2
e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a9
1c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f31
35b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44d3d83a-86c5-456f-a2ff-6312c3ca1afa name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.740783547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d978302-d1fa-47d6-b1e9-09e8f4c00fa5 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.740857388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d978302-d1fa-47d6-b1e9-09e8f4c00fa5 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.741959067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9af1f4ef-47df-49f7-90ce-24e394374c8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.743013745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202993742989239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9af1f4ef-47df-49f7-90ce-24e394374c8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.743416347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d639bc9-419a-44ac-bea5-11cd88918a98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.743494553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d639bc9-419a-44ac-bea5-11cd88918a98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.743737367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2
e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a9
1c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f31
35b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d639bc9-419a-44ac-bea5-11cd88918a98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.780318085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae5a1e62-571b-4df1-8e49-af4db792fd31 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.780420726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae5a1e62-571b-4df1-8e49-af4db792fd31 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.781461722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0711217c-144d-4976-9261-5b4d829224d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.782710725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202993782685827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0711217c-144d-4976-9261-5b4d829224d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.783487965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f37101-cd54-4f5f-9ec7-b5ab251ea72d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.783554295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f37101-cd54-4f5f-9ec7-b5ab251ea72d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.783838929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2
e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a9
1c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f31
35b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f37101-cd54-4f5f-9ec7-b5ab251ea72d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.821699075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62031900-5239-41c8-bc93-190d9247becf name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.821790361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62031900-5239-41c8-bc93-190d9247becf name=/runtime.v1.RuntimeService/Version
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.822554131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7267404b-bd07-4a7f-8b5e-9c8aebfa0163 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.823624658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202993823597841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7267404b-bd07-4a7f-8b5e-9c8aebfa0163 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.824397086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b670a0-6229-49bf-8ca3-c874645ac2f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.824461229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b670a0-6229-49bf-8ca3-c874645ac2f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:36:33 addons-218885 crio[662]: time="2024-09-24 18:36:33.824700853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74747d3759d103a0a3e685c43e006ddc40846979226acccd1bc86090ce606584,PodSandboxId:4de634f477b2153df6bf5881fd9c39cfb190ac96e690e53ae1c7bb836d1e4379,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727202827448863915,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-6h8qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dbb2ff2-a88e-47dd-98ff-788c8d9f990b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:337fc816891f4279c782784f0736511194960e61cbf45a008fd0532d15a7508f,PodSandboxId:8513afce7fba1336b2e20f58d26c786b63875bac35237991e38fea09faab1f92,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1727202711273209120,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-5nkmt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59e6f1f0-361c-4bc4-bdad-ee140581d073,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c6ac506dcfcf9cfa1fff11e877c73a7301129a6ecf5e053b30759b7d99cc78,PodSandboxId:c37c20e05e8843140c81b8018052241f2ed0e7c0fbd6e88c98b7ac0e926a9ade,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727202687633342951,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: dcc9442c-a1e0-46a5-9db8-d027ceac1950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd,PodSandboxId:2c67c0d05137e0ec73851612ff2185170aa276ea320e2cd8f4a6a0a71ef88192,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727202142413103666,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b9jr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 93deada6-273f-48ac-b9de-c15825530c1f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e31517907a7d9551c8e9e7375fef9f817b0dae3745b7f7e6a359481276a5fe,PodSandboxId:059713d5f2942e8ede8fc84f87dc8a62a21ac3405d06191bae2cd7462fcdbbd3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONT
AINER_RUNNING,CreatedAt:1727202093707404343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pkzn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65ed5b0c-3307-4c48-b8dc-666848d353fc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34,PodSandboxId:d3cf49536a775059e28f2fa79ab3e48e4f327be0dbae35610a556ce41401a89e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727202066494013597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a66ff5-32a5-4cdb-9073-da217f1138f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae,PodSandboxId:f51323ffd92af2f11b2fd105dbd855797a82c930e423375749cf00aa81144f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2
e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727202064699970678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wbgv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793eb56-d95a-49f1-8294-1ab4837d5d36,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3,PodSandboxId:9379757a98736e16ad81c9443b123800a6ebac50f70eacbfaf35713582566dad,Metadata:&ContainerMetadata{Nam
e:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727202062391497092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jsjnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07996bfd-1ae9-4e9c-9148-14966458de66,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef,PodSandboxId:a466770b867e2fc0fa51cb8add999d3390779206baa1cb35a345264f62e6c93d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727202051179093922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58f7cca19e2f3ff0b4c700a54c6c183,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5,PodSandboxId:5b729a73d998d0556f4387c9284e0af77999474f4c03ea46ed8185ac3f9119f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727202051180588384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da642d5c94c7507578558cbed0fc241c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93,PodSandboxId:f744f09f310f16d040ffd6b58c12920fdfadee9f122e2dcd163800338f47d777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a9
1c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727202051150395297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6442ad70ea7295b7e243b2fa7ca3de8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de,PodSandboxId:8363686ba5d19700b174e56d4d3ac206d9a10b4586cc9fe13b9cbed3d0656fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f31
35b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727202051107614012,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc93ecbe7d2f9eee0e6aa527b58ce9c2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16b670a0-6229-49bf-8ca3-c874645ac2f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74747d3759d10       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   4de634f477b21       hello-world-app-55bf9c44b4-6h8qp
	337fc816891f4       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                   4 minutes ago       Running             headlamp                  0                   8513afce7fba1       headlamp-7b5c95b59d-5nkmt
	d6c6ac506dcfc       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   c37c20e05e884       nginx
	c303d9afee770       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   2c67c0d05137e       gcp-auth-89d5ffd79-b9jr2
	70e31517907a7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   059713d5f2942       metrics-server-84c5f94fbc-pkzn4
	892df4e49ab85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   d3cf49536a775       storage-provisioner
	7be47175c23bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   f51323ffd92af       coredns-7c65d6cfc9-wbgv9
	05055f26daa39       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   9379757a98736       kube-proxy-jsjnj
	01aed06020fea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   5b729a73d998d       etcd-addons-218885
	5872d2d84daec       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   a466770b867e2       kube-scheduler-addons-218885
	b45900bdb8412       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   f744f09f310f1       kube-apiserver-addons-218885
	176b7e7ab3b8a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   8363686ba5d19       kube-controller-manager-addons-218885
	
	
	==> coredns [7be47175c23bbde37487f56b66baa578e3060c0189a63f4156cdba80a73738ae] <==
	Trace[220166093]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:21:35.643)
	Trace[220166093]: [30.000976871s] [30.000976871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37772 - 57440 "HINFO IN 3713161987249755073.4462496746838402409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019354444s
	[INFO] 10.244.0.7:47836 - 60886 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000305804s
	[INFO] 10.244.0.7:47836 - 53458 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110924s
	[INFO] 10.244.0.7:54106 - 25653 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000141925s
	[INFO] 10.244.0.7:54106 - 53304 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103244s
	[INFO] 10.244.0.7:50681 - 41048 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104188s
	[INFO] 10.244.0.7:50681 - 16991 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007015s
	[INFO] 10.244.0.7:52606 - 52990 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072691s
	[INFO] 10.244.0.7:52606 - 39420 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000217542s
	[INFO] 10.244.0.7:60763 - 62338 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049324s
	[INFO] 10.244.0.7:60763 - 35968 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041375s
	[INFO] 10.244.0.21:36769 - 28384 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276065s
	[INFO] 10.244.0.21:57042 - 58510 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133543s
	[INFO] 10.244.0.21:58980 - 33022 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095524s
	[INFO] 10.244.0.21:60903 - 3777 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078686s
	[INFO] 10.244.0.21:36852 - 36641 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007641s
	[INFO] 10.244.0.21:41780 - 8788 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082769s
	[INFO] 10.244.0.21:33291 - 39478 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000959689s
	[INFO] 10.244.0.21:40331 - 57937 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000913308s
	
	
	==> describe nodes <==
	Name:               addons-218885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-218885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-218885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_20_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-218885
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-218885
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:36:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:34:03 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:34:03 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:34:03 +0000   Tue, 24 Sep 2024 18:20:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:34:03 +0000   Tue, 24 Sep 2024 18:20:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-218885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a62f96b82b1423cb3ca4a7e749331c6
	  System UUID:                5a62f96b-82b1-423c-b3ca-4a7e749331c6
	  Boot ID:                    98ef14c8-41cc-4a65-8db8-db6c1413a40a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-6h8qp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  gcp-auth                    gcp-auth-89d5ffd79-b9jr2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  headlamp                    headlamp-7b5c95b59d-5nkmt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 coredns-7c65d6cfc9-wbgv9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-218885                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-218885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-218885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jsjnj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-218885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-pkzn4          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-218885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-218885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-218885 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-218885 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-218885 event: Registered Node addons-218885 in Controller
	
	
	==> dmesg <==
	[  +6.289241] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.551408] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.866552] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.601209] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.093860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:22] kauditd_printk_skb: 80 callbacks suppressed
	[  +6.788661] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.792651] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.642866] kauditd_printk_skb: 43 callbacks suppressed
	[  +7.574267] kauditd_printk_skb: 3 callbacks suppressed
	[Sep24 18:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:24] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep24 18:30] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.411074] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.529390] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.909274] kauditd_printk_skb: 20 callbacks suppressed
	[Sep24 18:31] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.163684] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.109651] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.771905] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.406957] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.115904] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.233753] kauditd_printk_skb: 16 callbacks suppressed
	[Sep24 18:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [01aed06020fea288161b94609946ad26fe5fb9c066d4e615a8ce8107d8e36cb5] <==
	{"level":"info","ts":"2024-09-24T18:22:17.577995Z","caller":"traceutil/trace.go:171","msg":"trace[1652387929] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"205.221262ms","start":"2024-09-24T18:22:17.372753Z","end":"2024-09-24T18:22:17.577975Z","steps":["trace[1652387929] 'read index received'  (duration: 205.01138ms)","trace[1652387929] 'applied index is now lower than readState.Index'  (duration: 209.231µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:22:17.578361Z","caller":"traceutil/trace.go:171","msg":"trace[1017875941] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"420.198995ms","start":"2024-09-24T18:22:17.158143Z","end":"2024-09-24T18:22:17.578342Z","steps":["trace[1017875941] 'process raft request'  (duration: 419.668298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:22:17.158125Z","time spent":"420.358099ms","remote":"127.0.0.1:37844","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1101 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-24T18:22:17.578615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.164866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578661Z","caller":"traceutil/trace.go:171","msg":"trace[1038728676] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"104.208566ms","start":"2024-09-24T18:22:17.474443Z","end":"2024-09-24T18:22:17.578651Z","steps":["trace[1038728676] 'agreement among raft nodes before linearized reading'  (duration: 104.147281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:17.578424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.649301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:22:17.578829Z","caller":"traceutil/trace.go:171","msg":"trace[2133008651] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"206.08201ms","start":"2024-09-24T18:22:17.372738Z","end":"2024-09-24T18:22:17.578820Z","steps":["trace[2133008651] 'agreement among raft nodes before linearized reading'  (duration: 205.618799ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:51.489118Z","caller":"traceutil/trace.go:171","msg":"trace[2026224945] linearizableReadLoop","detail":"{readStateIndex:2186; appliedIndex:2185; }","duration":"263.239917ms","start":"2024-09-24T18:30:51.225863Z","end":"2024-09-24T18:30:51.489103Z","steps":["trace[2026224945] 'read index received'  (duration: 263.063528ms)","trace[2026224945] 'applied index is now lower than readState.Index'  (duration: 175.847µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:30:51.489403Z","caller":"traceutil/trace.go:171","msg":"trace[588894328] transaction","detail":"{read_only:false; response_revision:2041; number_of_response:1; }","duration":"264.853791ms","start":"2024-09-24T18:30:51.224537Z","end":"2024-09-24T18:30:51.489390Z","steps":["trace[588894328] 'process raft request'  (duration: 264.431123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.718428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/registry-test.17f841a5c5f6a88e\" ","response":"range_response_count:1 size:727"}
	{"level":"info","ts":"2024-09-24T18:30:51.489618Z","caller":"traceutil/trace.go:171","msg":"trace[141298160] range","detail":"{range_begin:/registry/events/default/registry-test.17f841a5c5f6a88e; range_end:; response_count:1; response_revision:2041; }","duration":"263.752191ms","start":"2024-09-24T18:30:51.225860Z","end":"2024-09-24T18:30:51.489612Z","steps":["trace[141298160] 'agreement among raft nodes before linearized reading'  (duration: 263.661857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:30:51.489708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.1176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:30:51.489721Z","caller":"traceutil/trace.go:171","msg":"trace[905351139] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2041; }","duration":"149.1321ms","start":"2024-09-24T18:30:51.340585Z","end":"2024-09-24T18:30:51.489717Z","steps":["trace[905351139] 'agreement among raft nodes before linearized reading'  (duration: 149.10741ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:30:52.421183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-24T18:30:52.455466Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"33.86366ms","hash":2015250619,"current-db-size-bytes":6524928,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3493888,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-24T18:30:52.455576Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2015250619,"revision":1524,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T18:31:11.923147Z","caller":"traceutil/trace.go:171","msg":"trace[1594584679] linearizableReadLoop","detail":"{readStateIndex:2418; appliedIndex:2417; }","duration":"180.184095ms","start":"2024-09-24T18:31:11.742947Z","end":"2024-09-24T18:31:11.923131Z","steps":["trace[1594584679] 'read index received'  (duration: 179.421282ms)","trace[1594584679] 'applied index is now lower than readState.Index'  (duration: 762.207µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:31:11.923267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.306652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:31:11.923304Z","caller":"traceutil/trace.go:171","msg":"trace[669207299] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-external-health-monitor-controller; range_end:; response_count:0; response_revision:2266; }","duration":"180.352098ms","start":"2024-09-24T18:31:11.742942Z","end":"2024-09-24T18:31:11.923294Z","steps":["trace[669207299] 'agreement among raft nodes before linearized reading'  (duration: 180.263747ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:31:11.923415Z","caller":"traceutil/trace.go:171","msg":"trace[1440776756] transaction","detail":"{read_only:false; response_revision:2266; number_of_response:1; }","duration":"324.268386ms","start":"2024-09-24T18:31:11.599140Z","end":"2024-09-24T18:31:11.923409Z","steps":["trace[1440776756] 'process raft request'  (duration: 323.264023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:31:11.923487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T18:31:11.599123Z","time spent":"324.320186ms","remote":"127.0.0.1:38080","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":696,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/csinodes/addons-218885\" mod_revision:1071 > success:<request_put:<key:\"/registry/csinodes/addons-218885\" value_size:656 >> failure:<request_range:<key:\"/registry/csinodes/addons-218885\" > >"}
	{"level":"info","ts":"2024-09-24T18:31:56.526515Z","caller":"traceutil/trace.go:171","msg":"trace[1426180008] transaction","detail":"{read_only:false; response_revision:2458; number_of_response:1; }","duration":"162.452632ms","start":"2024-09-24T18:31:56.364044Z","end":"2024-09-24T18:31:56.526496Z","steps":["trace[1426180008] 'process raft request'  (duration: 162.338027ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:35:52.428466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2051}
	{"level":"info","ts":"2024-09-24T18:35:52.447127Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2051,"took":"18.141084ms","hash":1940376252,"current-db-size-bytes":6524928,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4579328,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-24T18:35:52.447219Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1940376252,"revision":2051,"compact-revision":1524}
	
	
	==> gcp-auth [c303d9afee770ae2eda2d8cb9e029e10d46565d5f87e1aacf30f6dad3e3d41cd] <==
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:22:24 Ready to marshal response ...
	2024/09/24 18:22:24 Ready to write response ...
	2024/09/24 18:30:36 Ready to marshal response ...
	2024/09/24 18:30:36 Ready to write response ...
	2024/09/24 18:30:45 Ready to marshal response ...
	2024/09/24 18:30:45 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:30:50 Ready to marshal response ...
	2024/09/24 18:30:50 Ready to write response ...
	2024/09/24 18:31:02 Ready to marshal response ...
	2024/09/24 18:31:02 Ready to write response ...
	2024/09/24 18:31:03 Ready to marshal response ...
	2024/09/24 18:31:03 Ready to write response ...
	2024/09/24 18:31:23 Ready to marshal response ...
	2024/09/24 18:31:23 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:31:45 Ready to marshal response ...
	2024/09/24 18:31:45 Ready to write response ...
	2024/09/24 18:33:44 Ready to marshal response ...
	2024/09/24 18:33:44 Ready to write response ...
	
	
	==> kernel <==
	 18:36:34 up 16 min,  0 users,  load average: 0.12, 0.44, 0.46
	Linux addons-218885 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b45900bdb84120d2b0e5a5dcb15a77d3adc41436c4a8c297983d2dc3c3e33a93] <==
	I0924 18:31:17.549570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.551996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.583859       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.583989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0924 18:31:17.635304       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0924 18:31:17.635349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0924 18:31:18.584648       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0924 18:31:18.636057       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0924 18:31:18.663553       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0924 18:31:19.176932       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0924 18:31:23.694509       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0924 18:31:23.876326       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.166.187"}
	E0924 18:31:24.791416       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:25.798413       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:26.805249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:27.812198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:28.819268       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:29.826573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:30.833568       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:31.840586       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:32.848358       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:33.854844       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0924 18:31:45.893080       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.75.91"}
	I0924 18:33:45.006000       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.155.188"}
	E0924 18:33:46.752766       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [176b7e7ab3b8a771f88df4a1857f2fe15185c86635661cfc3d36f9a276a729de] <==
	W0924 18:34:25.899354       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:34:25.899467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:34:35.929379       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:34:35.929486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:34:37.216688       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:34:37.216849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:34:57.175733       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:34:57.175845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:35:25.306504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:35:25.306569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:35:29.095460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:35:29.095518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:35:29.177983       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:35:29.178035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:35:31.234247       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:35:31.234365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:36:00.179749       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:36:00.179859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:36:06.892756       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:36:06.892864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:36:07.130056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:36:07.130106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:36:21.339457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:36:21.339579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:36:32.893503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.074µs"
	
	
	==> kube-proxy [05055f26daa39aed87cf7d873b1526912f0d56ac562bd79c217b2c5c135531c3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:21:04.310826       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:21:04.382306       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0924 18:21:04.382374       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:21:05.227657       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:21:05.227715       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:21:05.227740       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:21:05.641037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:21:05.641385       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:21:05.641397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:21:05.651378       1 config.go:199] "Starting service config controller"
	I0924 18:21:05.651407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:21:05.651432       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:21:05.651436       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:21:05.651985       1 config.go:328] "Starting node config controller"
	I0924 18:21:05.651993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:21:05.751639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:21:05.751676       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:21:05.760267       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5872d2d84daecbb7286168900d232aaa3de8d6e2a7efd42d1a21e79e7716fbef] <==
	W0924 18:20:53.710329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:20:53.710356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:53.710442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.710520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:53.710546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:53.712017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:53.712081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.523135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:54.523250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.558496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:54.558598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.602148       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:20:54.602194       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:20:54.615690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:20:54.616117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.623597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 18:20:54.623684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.642634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:20:54.643012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.652972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:54.653082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:54.764823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 18:20:54.764896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0924 18:20:57.293683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:35:56 addons-218885 kubelet[1212]: E0924 18:35:56.237051    1212 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:35:56 addons-218885 kubelet[1212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:35:56 addons-218885 kubelet[1212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:35:56 addons-218885 kubelet[1212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:35:56 addons-218885 kubelet[1212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:35:56 addons-218885 kubelet[1212]: E0924 18:35:56.934273    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202956933792947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:35:56 addons-218885 kubelet[1212]: E0924 18:35:56.934341    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202956933792947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:35:59 addons-218885 kubelet[1212]: E0924 18:35:59.219379    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b78ade15-29d4-44d6-bef8-3a957b847bb0"
	Sep 24 18:36:06 addons-218885 kubelet[1212]: E0924 18:36:06.937240    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202966936853567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:06 addons-218885 kubelet[1212]: E0924 18:36:06.937275    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202966936853567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:11 addons-218885 kubelet[1212]: E0924 18:36:11.218371    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b78ade15-29d4-44d6-bef8-3a957b847bb0"
	Sep 24 18:36:15 addons-218885 kubelet[1212]: I0924 18:36:15.218277    1212 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-wbgv9" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 18:36:16 addons-218885 kubelet[1212]: E0924 18:36:16.939333    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202976938868777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:16 addons-218885 kubelet[1212]: E0924 18:36:16.939372    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202976938868777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:23 addons-218885 kubelet[1212]: E0924 18:36:23.219343    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b78ade15-29d4-44d6-bef8-3a957b847bb0"
	Sep 24 18:36:26 addons-218885 kubelet[1212]: E0924 18:36:26.941284    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202986940944608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:26 addons-218885 kubelet[1212]: E0924 18:36:26.941320    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727202986940944608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:36:32 addons-218885 kubelet[1212]: I0924 18:36:32.909641    1212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-6h8qp" podStartSLOduration=166.827833071 podStartE2EDuration="2m48.909624248s" podCreationTimestamp="2024-09-24 18:33:44 +0000 UTC" firstStartedPulling="2024-09-24 18:33:45.350969734 +0000 UTC m=+769.282604225" lastFinishedPulling="2024-09-24 18:33:47.4327609 +0000 UTC m=+771.364395402" observedRunningTime="2024-09-24 18:33:48.057048759 +0000 UTC m=+771.988683280" watchObservedRunningTime="2024-09-24 18:36:32.909624248 +0000 UTC m=+936.841258757"
	Sep 24 18:36:34 addons-218885 kubelet[1212]: E0924 18:36:34.220411    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b78ade15-29d4-44d6-bef8-3a957b847bb0"
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.282251    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r558t\" (UniqueName: \"kubernetes.io/projected/65ed5b0c-3307-4c48-b8dc-666848d353fc-kube-api-access-r558t\") pod \"65ed5b0c-3307-4c48-b8dc-666848d353fc\" (UID: \"65ed5b0c-3307-4c48-b8dc-666848d353fc\") "
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.282307    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65ed5b0c-3307-4c48-b8dc-666848d353fc-tmp-dir\") pod \"65ed5b0c-3307-4c48-b8dc-666848d353fc\" (UID: \"65ed5b0c-3307-4c48-b8dc-666848d353fc\") "
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.283361    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ed5b0c-3307-4c48-b8dc-666848d353fc-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "65ed5b0c-3307-4c48-b8dc-666848d353fc" (UID: "65ed5b0c-3307-4c48-b8dc-666848d353fc"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.285969    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ed5b0c-3307-4c48-b8dc-666848d353fc-kube-api-access-r558t" (OuterVolumeSpecName: "kube-api-access-r558t") pod "65ed5b0c-3307-4c48-b8dc-666848d353fc" (UID: "65ed5b0c-3307-4c48-b8dc-666848d353fc"). InnerVolumeSpecName "kube-api-access-r558t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.382970    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r558t\" (UniqueName: \"kubernetes.io/projected/65ed5b0c-3307-4c48-b8dc-666848d353fc-kube-api-access-r558t\") on node \"addons-218885\" DevicePath \"\""
	Sep 24 18:36:34 addons-218885 kubelet[1212]: I0924 18:36:34.383008    1212 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65ed5b0c-3307-4c48-b8dc-666848d353fc-tmp-dir\") on node \"addons-218885\" DevicePath \"\""
	
	
	==> storage-provisioner [892df4e49ab85492657cc5d1c8404bc9bbdf9a850ff6a877c13f1f0bde448d34] <==
	I0924 18:21:07.075958       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:21:07.205644       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:21:07.205811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:21:07.484816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:21:07.489301       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	I0924 18:21:07.503782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa2296f7-92f6-4a3d-97ef-5ea843d9a5be", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f became leader
	I0924 18:21:07.594117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-218885_6633e999-b40f-40f8-8839-f401b4cb474f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-218885 -n addons-218885
helpers_test.go:261: (dbg) Run:  kubectl --context addons-218885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-218885 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-218885 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218885/192.168.39.215
	Start Time:       Tue, 24 Sep 2024 18:22:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5n6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z5n6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  14m                  default-scheduler  Successfully assigned default/busybox to addons-218885
	  Normal   Pulling    12m (x4 over 14m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (368.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 node stop m02 -v=7 --alsologtostderr
E0924 18:45:10.285042   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:45:30.766323   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:46:11.728208   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-685475 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.446870075s)

                                                
                                                
-- stdout --
	* Stopping node "ha-685475-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:45:02.168708   26904 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:45:02.168876   26904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:45:02.168887   26904 out.go:358] Setting ErrFile to fd 2...
	I0924 18:45:02.168892   26904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:45:02.169145   26904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:45:02.169469   26904 mustload.go:65] Loading cluster: ha-685475
	I0924 18:45:02.170018   26904 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:45:02.170036   26904 stop.go:39] StopHost: ha-685475-m02
	I0924 18:45:02.170558   26904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:45:02.170613   26904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:45:02.186248   26904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0924 18:45:02.186710   26904 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:45:02.187247   26904 main.go:141] libmachine: Using API Version  1
	I0924 18:45:02.187274   26904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:45:02.187585   26904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:45:02.189885   26904 out.go:177] * Stopping node "ha-685475-m02"  ...
	I0924 18:45:02.191323   26904 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 18:45:02.191359   26904 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:45:02.191542   26904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 18:45:02.191593   26904 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:45:02.194347   26904 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:45:02.194739   26904 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:45:02.194767   26904 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:45:02.194885   26904 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:45:02.195038   26904 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:45:02.195161   26904 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:45:02.195295   26904 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:45:02.277868   26904 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 18:45:02.330256   26904 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 18:45:02.384683   26904 main.go:141] libmachine: Stopping "ha-685475-m02"...
	I0924 18:45:02.384716   26904 main.go:141] libmachine: (ha-685475-m02) Calling .GetState
	I0924 18:45:02.386288   26904 main.go:141] libmachine: (ha-685475-m02) Calling .Stop
	I0924 18:45:02.389736   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 0/120
	I0924 18:45:03.391085   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 1/120
	I0924 18:45:04.393263   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 2/120
	I0924 18:45:05.394662   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 3/120
	I0924 18:45:06.395887   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 4/120
	I0924 18:45:07.398199   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 5/120
	I0924 18:45:08.399499   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 6/120
	I0924 18:45:09.400758   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 7/120
	I0924 18:45:10.402519   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 8/120
	I0924 18:45:11.403813   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 9/120
	I0924 18:45:12.405493   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 10/120
	I0924 18:45:13.406882   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 11/120
	I0924 18:45:14.408108   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 12/120
	I0924 18:45:15.409545   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 13/120
	I0924 18:45:16.410720   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 14/120
	I0924 18:45:17.413036   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 15/120
	I0924 18:45:18.414329   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 16/120
	I0924 18:45:19.415824   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 17/120
	I0924 18:45:20.417228   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 18/120
	I0924 18:45:21.418318   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 19/120
	I0924 18:45:22.420679   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 20/120
	I0924 18:45:23.422294   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 21/120
	I0924 18:45:24.423576   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 22/120
	I0924 18:45:25.425017   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 23/120
	I0924 18:45:26.426444   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 24/120
	I0924 18:45:27.428056   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 25/120
	I0924 18:45:28.429966   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 26/120
	I0924 18:45:29.431436   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 27/120
	I0924 18:45:30.432749   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 28/120
	I0924 18:45:31.433970   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 29/120
	I0924 18:45:32.435820   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 30/120
	I0924 18:45:33.437196   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 31/120
	I0924 18:45:34.438538   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 32/120
	I0924 18:45:35.439998   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 33/120
	I0924 18:45:36.441228   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 34/120
	I0924 18:45:37.443527   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 35/120
	I0924 18:45:38.445244   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 36/120
	I0924 18:45:39.446546   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 37/120
	I0924 18:45:40.447832   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 38/120
	I0924 18:45:41.449176   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 39/120
	I0924 18:45:42.451025   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 40/120
	I0924 18:45:43.453244   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 41/120
	I0924 18:45:44.454515   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 42/120
	I0924 18:45:45.455672   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 43/120
	I0924 18:45:46.457358   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 44/120
	I0924 18:45:47.458970   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 45/120
	I0924 18:45:48.461403   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 46/120
	I0924 18:45:49.462610   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 47/120
	I0924 18:45:50.464745   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 48/120
	I0924 18:45:51.465937   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 49/120
	I0924 18:45:52.467500   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 50/120
	I0924 18:45:53.469346   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 51/120
	I0924 18:45:54.470754   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 52/120
	I0924 18:45:55.472407   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 53/120
	I0924 18:45:56.473813   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 54/120
	I0924 18:45:57.475916   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 55/120
	I0924 18:45:58.477191   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 56/120
	I0924 18:45:59.478526   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 57/120
	I0924 18:46:00.480444   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 58/120
	I0924 18:46:01.481834   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 59/120
	I0924 18:46:02.483134   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 60/120
	I0924 18:46:03.485316   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 61/120
	I0924 18:46:04.486842   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 62/120
	I0924 18:46:05.488232   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 63/120
	I0924 18:46:06.489444   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 64/120
	I0924 18:46:07.491297   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 65/120
	I0924 18:46:08.492668   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 66/120
	I0924 18:46:09.493989   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 67/120
	I0924 18:46:10.495365   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 68/120
	I0924 18:46:11.496618   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 69/120
	I0924 18:46:12.498213   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 70/120
	I0924 18:46:13.499535   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 71/120
	I0924 18:46:14.500864   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 72/120
	I0924 18:46:15.502286   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 73/120
	I0924 18:46:16.503575   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 74/120
	I0924 18:46:17.506215   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 75/120
	I0924 18:46:18.507447   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 76/120
	I0924 18:46:19.508768   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 77/120
	I0924 18:46:20.510079   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 78/120
	I0924 18:46:21.511314   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 79/120
	I0924 18:46:22.513182   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 80/120
	I0924 18:46:23.514655   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 81/120
	I0924 18:46:24.516011   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 82/120
	I0924 18:46:25.517270   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 83/120
	I0924 18:46:26.518719   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 84/120
	I0924 18:46:27.520658   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 85/120
	I0924 18:46:28.521793   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 86/120
	I0924 18:46:29.523163   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 87/120
	I0924 18:46:30.525140   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 88/120
	I0924 18:46:31.526559   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 89/120
	I0924 18:46:32.528717   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 90/120
	I0924 18:46:33.530068   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 91/120
	I0924 18:46:34.531573   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 92/120
	I0924 18:46:35.532893   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 93/120
	I0924 18:46:36.534401   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 94/120
	I0924 18:46:37.536319   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 95/120
	I0924 18:46:38.537634   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 96/120
	I0924 18:46:39.538962   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 97/120
	I0924 18:46:40.540169   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 98/120
	I0924 18:46:41.541443   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 99/120
	I0924 18:46:42.543513   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 100/120
	I0924 18:46:43.545485   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 101/120
	I0924 18:46:44.547086   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 102/120
	I0924 18:46:45.548452   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 103/120
	I0924 18:46:46.549799   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 104/120
	I0924 18:46:47.551783   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 105/120
	I0924 18:46:48.553213   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 106/120
	I0924 18:46:49.554517   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 107/120
	I0924 18:46:50.555806   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 108/120
	I0924 18:46:51.557187   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 109/120
	I0924 18:46:52.559191   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 110/120
	I0924 18:46:53.561259   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 111/120
	I0924 18:46:54.562396   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 112/120
	I0924 18:46:55.563761   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 113/120
	I0924 18:46:56.565098   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 114/120
	I0924 18:46:57.566915   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 115/120
	I0924 18:46:58.568164   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 116/120
	I0924 18:46:59.569573   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 117/120
	I0924 18:47:00.571094   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 118/120
	I0924 18:47:01.572421   26904 main.go:141] libmachine: (ha-685475-m02) Waiting for machine to stop 119/120
	I0924 18:47:02.573543   26904 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 18:47:02.573695   26904 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-685475 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr: (18.724911826s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-685475 -n ha-685475
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 logs -n 25: (1.247722306s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m03_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m04 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp testdata/cp-test.txt                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m03 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-685475 node stop m02 -v=7                                                    | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:40:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:40:35.618652   22837 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:40:35.618943   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.618954   22837 out.go:358] Setting ErrFile to fd 2...
	I0924 18:40:35.618959   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.619154   22837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:40:35.619730   22837 out.go:352] Setting JSON to false
	I0924 18:40:35.620645   22837 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1387,"bootTime":1727201849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:40:35.620729   22837 start.go:139] virtualization: kvm guest
	I0924 18:40:35.622855   22837 out.go:177] * [ha-685475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:40:35.624385   22837 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:40:35.624401   22837 notify.go:220] Checking for updates...
	I0924 18:40:35.627290   22837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:40:35.628609   22837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:40:35.629977   22837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.631349   22837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:40:35.632638   22837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:40:35.634090   22837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:40:35.670308   22837 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:40:35.671877   22837 start.go:297] selected driver: kvm2
	I0924 18:40:35.671905   22837 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:40:35.671922   22837 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:40:35.672818   22837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.672911   22837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:40:35.688646   22837 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:40:35.688691   22837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:40:35.688908   22837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:40:35.688933   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:40:35.688955   22837 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0924 18:40:35.688963   22837 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:40:35.689004   22837 start.go:340] cluster config:
	{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:40:35.689084   22837 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.691077   22837 out.go:177] * Starting "ha-685475" primary control-plane node in "ha-685475" cluster
	I0924 18:40:35.692675   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:40:35.692727   22837 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:40:35.692737   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:40:35.692807   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:40:35.692817   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:40:35.693129   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:40:35.693148   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json: {Name:mkf04021428036cd37ddc8fca7772aaba780fa7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:40:35.693278   22837 start.go:360] acquireMachinesLock for ha-685475: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:40:35.693307   22837 start.go:364] duration metric: took 16.26µs to acquireMachinesLock for "ha-685475"
	I0924 18:40:35.693323   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:40:35.693388   22837 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:40:35.695217   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:40:35.695377   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:40:35.695407   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:40:35.709830   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0924 18:40:35.710273   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:40:35.710759   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:40:35.710782   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:40:35.711106   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:40:35.711266   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:40:35.711382   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:40:35.711548   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:40:35.711571   22837 client.go:168] LocalClient.Create starting
	I0924 18:40:35.711598   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:40:35.711635   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711648   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711694   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:40:35.711713   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711724   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711739   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:40:35.711747   22837 main.go:141] libmachine: (ha-685475) Calling .PreCreateCheck
	I0924 18:40:35.712023   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:40:35.712397   22837 main.go:141] libmachine: Creating machine...
	I0924 18:40:35.712411   22837 main.go:141] libmachine: (ha-685475) Calling .Create
	I0924 18:40:35.712547   22837 main.go:141] libmachine: (ha-685475) Creating KVM machine...
	I0924 18:40:35.713673   22837 main.go:141] libmachine: (ha-685475) DBG | found existing default KVM network
	I0924 18:40:35.714359   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.714247   22860 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000121a50}
	I0924 18:40:35.714400   22837 main.go:141] libmachine: (ha-685475) DBG | created network xml: 
	I0924 18:40:35.714421   22837 main.go:141] libmachine: (ha-685475) DBG | <network>
	I0924 18:40:35.714434   22837 main.go:141] libmachine: (ha-685475) DBG |   <name>mk-ha-685475</name>
	I0924 18:40:35.714443   22837 main.go:141] libmachine: (ha-685475) DBG |   <dns enable='no'/>
	I0924 18:40:35.714462   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714493   22837 main.go:141] libmachine: (ha-685475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:40:35.714508   22837 main.go:141] libmachine: (ha-685475) DBG |     <dhcp>
	I0924 18:40:35.714524   22837 main.go:141] libmachine: (ha-685475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:40:35.714536   22837 main.go:141] libmachine: (ha-685475) DBG |     </dhcp>
	I0924 18:40:35.714545   22837 main.go:141] libmachine: (ha-685475) DBG |   </ip>
	I0924 18:40:35.714555   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714563   22837 main.go:141] libmachine: (ha-685475) DBG | </network>
	I0924 18:40:35.714575   22837 main.go:141] libmachine: (ha-685475) DBG | 
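[editor's note] The lines above show the libvirt network definition the kvm2 driver generates before creating the private network: an isolated 192.168.39.0/24 with DHCP handing out .2-.253 and DNS disabled. A minimal, self-contained sketch of producing an equivalent definition with Go's text/template; the NetParams struct is illustrative, not minikube's actual type:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// NetParams is a hypothetical parameter struct for the network template.
type NetParams struct {
	Name      string
	Gateway   string
	Netmask   string
	DHCPStart string
	DHCPEnd   string
}

// networkTmpl mirrors the <network> definition printed in the log above.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>`

func main() {
	p := NetParams{
		Name:      "mk-ha-685475",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.39.2",
		DHCPEnd:   "192.168.39.253",
	}
	t := template.Must(template.New("net").Parse(networkTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}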
	I0924 18:40:35.719712   22837 main.go:141] libmachine: (ha-685475) DBG | trying to create private KVM network mk-ha-685475 192.168.39.0/24...
	I0924 18:40:35.786088   22837 main.go:141] libmachine: (ha-685475) DBG | private KVM network mk-ha-685475 192.168.39.0/24 created
	I0924 18:40:35.786128   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.786012   22860 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.786138   22837 main.go:141] libmachine: (ha-685475) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:35.786155   22837 main.go:141] libmachine: (ha-685475) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:40:35.786173   22837 main.go:141] libmachine: (ha-685475) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:40:36.040941   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.040806   22860 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa...
	I0924 18:40:36.268625   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268496   22860 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk...
	I0924 18:40:36.268672   22837 main.go:141] libmachine: (ha-685475) DBG | Writing magic tar header
	I0924 18:40:36.268724   22837 main.go:141] libmachine: (ha-685475) DBG | Writing SSH key tar header
	I0924 18:40:36.268756   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268615   22860 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:36.268769   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 (perms=drwx------)
	I0924 18:40:36.268781   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:40:36.268787   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:40:36.268796   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:40:36.268804   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:40:36.268835   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475
	I0924 18:40:36.268855   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:40:36.268865   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:40:36.268883   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:36.268895   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:36.268900   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:40:36.268908   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:40:36.268917   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:40:36.268929   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home
	I0924 18:40:36.268937   22837 main.go:141] libmachine: (ha-685475) DBG | Skipping /home - not owner
	I0924 18:40:36.269970   22837 main.go:141] libmachine: (ha-685475) define libvirt domain using xml: 
	I0924 18:40:36.270004   22837 main.go:141] libmachine: (ha-685475) <domain type='kvm'>
	I0924 18:40:36.270014   22837 main.go:141] libmachine: (ha-685475)   <name>ha-685475</name>
	I0924 18:40:36.270022   22837 main.go:141] libmachine: (ha-685475)   <memory unit='MiB'>2200</memory>
	I0924 18:40:36.270031   22837 main.go:141] libmachine: (ha-685475)   <vcpu>2</vcpu>
	I0924 18:40:36.270041   22837 main.go:141] libmachine: (ha-685475)   <features>
	I0924 18:40:36.270049   22837 main.go:141] libmachine: (ha-685475)     <acpi/>
	I0924 18:40:36.270059   22837 main.go:141] libmachine: (ha-685475)     <apic/>
	I0924 18:40:36.270084   22837 main.go:141] libmachine: (ha-685475)     <pae/>
	I0924 18:40:36.270105   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270115   22837 main.go:141] libmachine: (ha-685475)   </features>
	I0924 18:40:36.270123   22837 main.go:141] libmachine: (ha-685475)   <cpu mode='host-passthrough'>
	I0924 18:40:36.270131   22837 main.go:141] libmachine: (ha-685475)   
	I0924 18:40:36.270135   22837 main.go:141] libmachine: (ha-685475)   </cpu>
	I0924 18:40:36.270139   22837 main.go:141] libmachine: (ha-685475)   <os>
	I0924 18:40:36.270143   22837 main.go:141] libmachine: (ha-685475)     <type>hvm</type>
	I0924 18:40:36.270148   22837 main.go:141] libmachine: (ha-685475)     <boot dev='cdrom'/>
	I0924 18:40:36.270152   22837 main.go:141] libmachine: (ha-685475)     <boot dev='hd'/>
	I0924 18:40:36.270157   22837 main.go:141] libmachine: (ha-685475)     <bootmenu enable='no'/>
	I0924 18:40:36.270162   22837 main.go:141] libmachine: (ha-685475)   </os>
	I0924 18:40:36.270168   22837 main.go:141] libmachine: (ha-685475)   <devices>
	I0924 18:40:36.270179   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='cdrom'>
	I0924 18:40:36.270191   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/boot2docker.iso'/>
	I0924 18:40:36.270215   22837 main.go:141] libmachine: (ha-685475)       <target dev='hdc' bus='scsi'/>
	I0924 18:40:36.270223   22837 main.go:141] libmachine: (ha-685475)       <readonly/>
	I0924 18:40:36.270227   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270232   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='disk'>
	I0924 18:40:36.270240   22837 main.go:141] libmachine: (ha-685475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:40:36.270255   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk'/>
	I0924 18:40:36.270268   22837 main.go:141] libmachine: (ha-685475)       <target dev='hda' bus='virtio'/>
	I0924 18:40:36.270285   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270298   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270315   22837 main.go:141] libmachine: (ha-685475)       <source network='mk-ha-685475'/>
	I0924 18:40:36.270332   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270343   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270354   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270365   22837 main.go:141] libmachine: (ha-685475)       <source network='default'/>
	I0924 18:40:36.270375   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270384   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270394   22837 main.go:141] libmachine: (ha-685475)     <serial type='pty'>
	I0924 18:40:36.270402   22837 main.go:141] libmachine: (ha-685475)       <target port='0'/>
	I0924 18:40:36.270412   22837 main.go:141] libmachine: (ha-685475)     </serial>
	I0924 18:40:36.270421   22837 main.go:141] libmachine: (ha-685475)     <console type='pty'>
	I0924 18:40:36.270430   22837 main.go:141] libmachine: (ha-685475)       <target type='serial' port='0'/>
	I0924 18:40:36.270438   22837 main.go:141] libmachine: (ha-685475)     </console>
	I0924 18:40:36.270445   22837 main.go:141] libmachine: (ha-685475)     <rng model='virtio'>
	I0924 18:40:36.270455   22837 main.go:141] libmachine: (ha-685475)       <backend model='random'>/dev/random</backend>
	I0924 18:40:36.270471   22837 main.go:141] libmachine: (ha-685475)     </rng>
	I0924 18:40:36.270484   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270496   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270507   22837 main.go:141] libmachine: (ha-685475)   </devices>
	I0924 18:40:36.270515   22837 main.go:141] libmachine: (ha-685475) </domain>
	I0924 18:40:36.270524   22837 main.go:141] libmachine: (ha-685475) 
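[editor's note] The domain XML above (virtio disk and NICs, cdrom-then-hd boot order, serial console, virtio RNG) is what gets handed to libvirt to define and boot the VM. As a rough sketch, assuming the definition has already been written to a file, the same two steps can be driven through the virsh CLI; the kvm2 driver itself uses the libvirt Go bindings rather than shelling out:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it.
// virsh must be installed and qemu:///system reachable.
func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" makes the domain persistent; "virsh start" boots it.
	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", xmlPath},
		{"-c", "qemu:///system", "start", domainName},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := defineAndStart("ha-685475.xml", "ha-685475"); err != nil {
		log.Fatal(err)
	}
}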
	I0924 18:40:36.274620   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:29:bb:c5 in network default
	I0924 18:40:36.275145   22837 main.go:141] libmachine: (ha-685475) Ensuring networks are active...
	I0924 18:40:36.275164   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:36.275867   22837 main.go:141] libmachine: (ha-685475) Ensuring network default is active
	I0924 18:40:36.276239   22837 main.go:141] libmachine: (ha-685475) Ensuring network mk-ha-685475 is active
	I0924 18:40:36.276892   22837 main.go:141] libmachine: (ha-685475) Getting domain xml...
	I0924 18:40:36.277603   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:37.460480   22837 main.go:141] libmachine: (ha-685475) Waiting to get IP...
	I0924 18:40:37.461314   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.461739   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.461774   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.461717   22860 retry.go:31] will retry after 296.388363ms: waiting for machine to come up
	I0924 18:40:37.760304   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.760785   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.760810   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.760740   22860 retry.go:31] will retry after 328.765263ms: waiting for machine to come up
	I0924 18:40:38.091364   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.091840   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.091866   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.091794   22860 retry.go:31] will retry after 475.786926ms: waiting for machine to come up
	I0924 18:40:38.569463   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.569893   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.569921   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.569836   22860 retry.go:31] will retry after 449.224473ms: waiting for machine to come up
	I0924 18:40:39.020465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.020861   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.020885   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.020825   22860 retry.go:31] will retry after 573.37705ms: waiting for machine to come up
	I0924 18:40:39.595466   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.595901   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.595920   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.595866   22860 retry.go:31] will retry after 888.819714ms: waiting for machine to come up
	I0924 18:40:40.485857   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:40.486194   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:40.486220   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:40.486169   22860 retry.go:31] will retry after 849.565748ms: waiting for machine to come up
	I0924 18:40:41.336920   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:41.337334   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:41.337355   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:41.337299   22860 retry.go:31] will retry after 943.088304ms: waiting for machine to come up
	I0924 18:40:42.282339   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:42.282747   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:42.282769   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:42.282704   22860 retry.go:31] will retry after 1.602523393s: waiting for machine to come up
	I0924 18:40:43.887465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:43.887909   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:43.887926   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:43.887863   22860 retry.go:31] will retry after 1.565249639s: waiting for machine to come up
	I0924 18:40:45.455849   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:45.456357   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:45.456383   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:45.456304   22860 retry.go:31] will retry after 2.532618475s: waiting for machine to come up
	I0924 18:40:47.991803   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:47.992180   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:47.992208   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:47.992135   22860 retry.go:31] will retry after 2.721738632s: waiting for machine to come up
	I0924 18:40:50.715276   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:50.715664   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:50.715696   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:50.715634   22860 retry.go:31] will retry after 2.97095557s: waiting for machine to come up
	I0924 18:40:53.689583   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:53.689985   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:53.690027   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:53.689963   22860 retry.go:31] will retry after 4.964736548s: waiting for machine to come up
	I0924 18:40:58.657846   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658217   22837 main.go:141] libmachine: (ha-685475) Found IP for machine: 192.168.39.7
	I0924 18:40:58.658231   22837 main.go:141] libmachine: (ha-685475) Reserving static IP address...
	I0924 18:40:58.658245   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has current primary IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658686   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "ha-685475", mac: "52:54:00:bb:26:52", ip: "192.168.39.7"} in network mk-ha-685475
	I0924 18:40:58.726895   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:40:58.726926   22837 main.go:141] libmachine: (ha-685475) Reserved static IP address: 192.168.39.7
	I0924 18:40:58.726937   22837 main.go:141] libmachine: (ha-685475) Waiting for SSH to be available...
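[editor's note] The repeated "will retry after ..." lines above are the driver polling for the guest's DHCP lease with a growing, jittered delay until the IP appears. A self-contained sketch of that retry pattern, where lookupIP is a stand-in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for querying libvirt's DHCP leases for the
// machine's MAC address; it fails until the guest has brought up its NIC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease appears on the 5th try
		return "", errNoLease
	}
	return "192.168.39.7", nil
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered, roughly-doubling backoff, capped so a slow boot does
		// not stretch the wait between retries out indefinitely.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}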
	I0924 18:40:58.729433   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.729749   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475
	I0924 18:40:58.729778   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find defined IP address of network mk-ha-685475 interface with MAC address 52:54:00:bb:26:52
	I0924 18:40:58.729916   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:40:58.729941   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:40:58.729969   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:40:58.729980   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:40:58.729993   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:40:58.733379   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:40:58.733402   22837 main.go:141] libmachine: (ha-685475) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:40:58.733413   22837 main.go:141] libmachine: (ha-685475) DBG | command : exit 0
	I0924 18:40:58.733422   22837 main.go:141] libmachine: (ha-685475) DBG | err     : exit status 255
	I0924 18:40:58.733432   22837 main.go:141] libmachine: (ha-685475) DBG | output  : 
	I0924 18:41:01.734078   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:41:01.736442   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736846   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.736875   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736966   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:41:01.736988   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:41:01.737029   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:01.737052   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:41:01.737065   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:41:01.858518   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: <nil>: 
	I0924 18:41:01.858812   22837 main.go:141] libmachine: (ha-685475) KVM machine creation complete!
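[editor's note] WaitForSSH treats the machine as reachable once a no-op "exit 0" over SSH returns status 0; the first attempt above fails with status 255 before the address is known, the second succeeds. A stripped-down version of that probe using the same client options seen in the log; the address and key path are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh docker@addr exit 0` succeeds.
func sshReady(addr, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	for {
		if err := sshReady("192.168.39.7", "/path/to/id_rsa"); err != nil {
			fmt.Println("ssh not ready yet:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("ssh is available")
		return
	}
}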
	I0924 18:41:01.859085   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:01.859647   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859818   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859970   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:01.859985   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:01.861184   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:01.861196   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:01.861201   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:01.861206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.863734   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864111   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.864137   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864287   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.864470   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864641   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864792   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.864958   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.865168   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.865180   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:01.965971   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:01.965992   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:01.965999   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.968393   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968679   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.968705   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968849   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.968989   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969127   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969226   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.969360   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.969511   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.969521   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:02.070902   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:02.070990   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:02.071004   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:02.071015   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071246   22837 buildroot.go:166] provisioning hostname "ha-685475"
	I0924 18:41:02.071275   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071415   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.074599   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.074996   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.075019   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.075149   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.075311   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075419   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075520   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.075644   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.075797   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.075808   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475 && echo "ha-685475" | sudo tee /etc/hostname
	I0924 18:41:02.191183   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:41:02.191206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.193903   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194254   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.194277   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.194612   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194742   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194863   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.195018   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.195214   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.195234   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:02.306707   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:02.306732   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:02.306752   22837 buildroot.go:174] setting up certificates
	I0924 18:41:02.306763   22837 provision.go:84] configureAuth start
	I0924 18:41:02.306771   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.307067   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:02.309510   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309793   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.309820   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309932   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.311757   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312020   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.312040   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312160   22837 provision.go:143] copyHostCerts
	I0924 18:41:02.312182   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312213   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:02.312221   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312284   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:02.312357   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312374   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:02.312380   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312403   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:02.312444   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312461   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:02.312467   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312487   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:02.312532   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475 san=[127.0.0.1 192.168.39.7 ha-685475 localhost minikube]
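[editor's note] configureAuth generates a server certificate signed by the minikube CA with the machine's addresses and names as SANs (127.0.0.1, 192.168.39.7, ha-685475, localhost, minikube). A condensed sketch of that step with crypto/x509; the throwaway CA and the lifetimes here are assumptions to keep the example self-contained, not minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example runs on its own; minikube loads its
	// existing ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-685475"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-685475", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.7")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}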
	I0924 18:41:02.610752   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:02.610810   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:02.610847   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.613269   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613544   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.613580   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613691   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.613856   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.614031   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.614140   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:02.696690   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:02.696775   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 18:41:02.719028   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:02.719087   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:41:02.740811   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:02.740889   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:02.762904   22837 provision.go:87] duration metric: took 456.128009ms to configureAuth
	I0924 18:41:02.762937   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:02.763113   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:02.763199   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.765836   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766227   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.766253   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766382   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.766616   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766752   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766881   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.767012   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.767181   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.767201   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:02.983298   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
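[editor's note] The command above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig so the service CIDR (10.96.0.0/12) is treated as an insecure registry range, then restarts crio. A small sketch of assembling that remote command string; executing it over SSH (minikube's ssh_runner) is left out:

package main

import "fmt"

// crioConfigCmd builds the shell that writes the sysconfig drop-in and
// restarts crio, mirroring the command shown in the log above.
func crioConfigCmd(insecureCIDR string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, insecureCIDR)
}

func main() {
	// An ssh runner would execute this on the guest; here we just print it.
	fmt.Println(crioConfigCmd("10.96.0.0/12"))
}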
	
	I0924 18:41:02.983327   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:02.983336   22837 main.go:141] libmachine: (ha-685475) Calling .GetURL
	I0924 18:41:02.984661   22837 main.go:141] libmachine: (ha-685475) DBG | Using libvirt version 6000000
	I0924 18:41:02.986674   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.986998   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.987035   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.987171   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:02.987184   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:02.987191   22837 client.go:171] duration metric: took 27.275613308s to LocalClient.Create
	I0924 18:41:02.987217   22837 start.go:167] duration metric: took 27.275670931s to libmachine.API.Create "ha-685475"
	I0924 18:41:02.987229   22837 start.go:293] postStartSetup for "ha-685475" (driver="kvm2")
	I0924 18:41:02.987244   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:02.987264   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:02.987513   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:02.987534   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.989371   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989734   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.989749   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989938   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.990114   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.990358   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.990533   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.072587   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:03.076584   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:03.076617   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:03.076688   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:03.076760   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:03.076772   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:03.076869   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:03.085953   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:03.108631   22837 start.go:296] duration metric: took 121.38524ms for postStartSetup
	I0924 18:41:03.108689   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:03.109239   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.111776   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112078   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.112107   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112319   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:03.112501   22837 start.go:128] duration metric: took 27.419103166s to createHost
	I0924 18:41:03.112522   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.114886   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115236   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.115261   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115422   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.115597   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115736   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115880   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.116026   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:03.116220   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:03.116230   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:03.223401   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203263.206629374
	
	I0924 18:41:03.223425   22837 fix.go:216] guest clock: 1727203263.206629374
	I0924 18:41:03.223432   22837 fix.go:229] Guest: 2024-09-24 18:41:03.206629374 +0000 UTC Remote: 2024-09-24 18:41:03.112512755 +0000 UTC m=+27.526898013 (delta=94.116619ms)
	I0924 18:41:03.223470   22837 fix.go:200] guest clock delta is within tolerance: 94.116619ms
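[editor's note] fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the delta is within tolerance. A small sketch of that comparison; the timestamps are the ones from the log, while the 2-second threshold is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and reports
// whether the guest/host clock delta is under maxSkew.
func withinTolerance(guestOut string, host time.Time, maxSkew time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxSkew
}

func main() {
	host := time.Date(2024, 9, 24, 18, 41, 3, 112512755, time.UTC)
	delta, ok := withinTolerance("1727203263.206629374", host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // roughly the 94ms seen in the log
}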
	I0924 18:41:03.223475   22837 start.go:83] releasing machines lock for "ha-685475", held for 27.53015951s
	I0924 18:41:03.223493   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.223794   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.226346   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226711   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.226738   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226887   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227337   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227484   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227576   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:03.227627   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.227700   22837 ssh_runner.go:195] Run: cat /version.json
	I0924 18:41:03.227725   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.230122   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230442   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230467   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230533   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230587   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.230756   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.230907   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.230941   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230962   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.231017   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.231113   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.231229   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.231324   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.231424   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.307645   22837 ssh_runner.go:195] Run: systemctl --version
	I0924 18:41:03.331733   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:03.485763   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:03.491914   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:03.491985   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:03.507429   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:03.507461   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:03.507517   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:03.523186   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:03.536999   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:03.537069   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:03.550683   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:03.564455   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:03.675808   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:03.815291   22837 docker.go:233] disabling docker service ...
	I0924 18:41:03.815369   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:03.829457   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:03.842075   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:03.968977   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:04.100834   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:41:04.114151   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:04.131432   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:04.131492   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.141141   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:04.141212   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.150778   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.160259   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.169851   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:04.179488   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.189760   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.206045   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
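The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start sysctl. A hedged Go sketch of the same kind of key rewrite done with a regexp instead of sed (the file path comes from the log; the setCrioOption helper is an assumption for the example):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = ...` line in a cri-o drop-in config,
// mirroring the sudo sed -i 's|^.*key = .*$|key = "value"|' commands above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioOption(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}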
	I0924 18:41:04.215615   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:04.224420   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:04.224481   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:04.237154   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
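The three commands above are the usual CRI networking prerequisites: probe the bridge-netfilter sysctl, load br_netfilter if the sysctl file is missing, then turn on IPv4 forwarding. A minimal Go sketch of the same check-then-load sequence (requires root; the ensureBridgeNetfilter name is an assumption for the example):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when the bridge-nf-call-iptables
// sysctl is absent, then enables IPv4 forwarding, as the log does above.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}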
	I0924 18:41:04.245941   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:04.372069   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:41:04.462010   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:04.462086   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:04.466695   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:04.466753   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:04.470287   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:04.509294   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:04.509389   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.538739   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.567366   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:04.568751   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:04.571725   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572167   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:04.572191   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572415   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:04.576247   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:04.588081   22837 kubeadm.go:883] updating cluster {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:41:04.588171   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:04.588210   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:04.618331   22837 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:41:04.618391   22837 ssh_runner.go:195] Run: which lz4
	I0924 18:41:04.622176   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0924 18:41:04.622306   22837 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:41:04.626507   22837 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:41:04.626538   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:41:05.822721   22837 crio.go:462] duration metric: took 1.200469004s to copy over tarball
	I0924 18:41:05.822802   22837 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:41:07.793883   22837 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.971051538s)
	I0924 18:41:07.793914   22837 crio.go:469] duration metric: took 1.971161974s to extract the tarball
	I0924 18:41:07.793928   22837 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:41:07.830067   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:07.873646   22837 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:41:07.873666   22837 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:41:07.873673   22837 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.31.1 crio true true} ...
	I0924 18:41:07.873776   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:41:07.873869   22837 ssh_runner.go:195] Run: crio config
	I0924 18:41:07.919600   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:07.919618   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:07.919627   22837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:41:07.919646   22837 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685475 NodeName:ha-685475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:41:07.919771   22837 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:41:07.919801   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:07.919842   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:07.935217   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:07.935310   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
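The generated kube-vip static pod above advertises 192.168.39.254 as the control-plane VIP on port 8443, with leader election and load-balancing enabled. A small diagnostic sketch in Go for checking whether that VIP is accepting connections (address and port taken from the config above; this is not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and API server port from the kube-vip config above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP is accepting TCP connections on port 8443")
}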
	I0924 18:41:07.935358   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:07.945016   22837 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:41:07.945087   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 18:41:07.954390   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0924 18:41:07.970734   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:07.986979   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0924 18:41:08.003862   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0924 18:41:08.020369   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:08.024317   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
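The grep/rewrite pair above keeps /etc/hosts idempotent: any existing line for control-plane.minikube.internal is dropped and the current mapping is appended. A hedged Go sketch of the same rewrite (host name and IP come from the log; pinHostsEntry is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any line ending in "\t<name>" from the hosts file and
// appends "<ip>\t<name>", mirroring the bash one-liner in the log above.
func pinHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}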
	I0924 18:41:08.036613   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:08.156453   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:08.174003   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.7
	I0924 18:41:08.174027   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:08.174053   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.174225   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:08.174336   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:08.174354   22837 certs.go:256] generating profile certs ...
	I0924 18:41:08.174424   22837 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:08.174441   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt with IP's: []
	I0924 18:41:08.287248   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt ...
	I0924 18:41:08.287273   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt: {Name:mkaceb17faeee44eeb1f13a92453dd9237d1455b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287463   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key ...
	I0924 18:41:08.287478   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key: {Name:mkbd762d73e102d20739c242c4dc875214afceba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287585   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac
	I0924 18:41:08.287601   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]
	I0924 18:41:08.420508   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac ...
	I0924 18:41:08.420553   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac: {Name:mk9b48c67c74aab074e9cdcef91880f465361f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420805   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac ...
	I0924 18:41:08.420830   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac: {Name:mk62b56ebe2e46561c15a5b3088127454fecceb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420950   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:08.421025   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:41:08.421075   22837 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:08.421093   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt with IP's: []
	I0924 18:41:08.543472   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt ...
	I0924 18:41:08.543508   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt: {Name:mk21cf6990553b97f2812e699190b5a379943f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543691   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key ...
	I0924 18:41:08.543706   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key: {Name:mk47726c7ba1340c780d325e14f433f9d0586f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543805   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:08.543829   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:08.543844   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:08.543860   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:08.543879   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:08.543898   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:08.543917   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:08.543935   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:08.543997   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:08.544044   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:08.544059   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:08.544094   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:08.544127   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:08.544158   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:08.544210   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:08.544249   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.544270   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.544289   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.544858   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:08.570597   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:08.594223   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:08.617808   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:08.641632   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 18:41:08.665659   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:08.689661   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:08.713308   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:08.737197   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:08.762148   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:08.788186   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:08.813589   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:41:08.831743   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:08.837364   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:08.849428   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854475   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854538   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.860154   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:41:08.871267   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:08.882296   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886561   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886625   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.892075   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:08.902853   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:08.913706   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.917998   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.918060   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.923875   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
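The ls/openssl/ln sequence above is the standard OpenSSL trust-store layout: hash the certificate subject and symlink the file as /etc/ssl/certs/<hash>.0. A hedged Go sketch of the same pattern, shelling out to openssl exactly as the log does (linkCACert is an illustrative helper name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert hashes certPath with `openssl x509 -hash -noout -in <cert>` and
// symlinks it into /etc/ssl/certs/<hash>.0, as the commands above do.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Example path; the log links minikubeCA.pem, 10949.pem and 109492.pem.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}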
	I0924 18:41:08.937683   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:08.942083   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:08.942144   22837 kubeadm.go:392] StartCluster: {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:08.942205   22837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:41:08.942246   22837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:41:08.996144   22837 cri.go:89] found id: ""
	I0924 18:41:08.996211   22837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:41:09.006172   22837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:41:09.015736   22837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:41:09.025439   22837 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:41:09.025460   22837 kubeadm.go:157] found existing configuration files:
	
	I0924 18:41:09.025508   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:41:09.034746   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:41:09.034800   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:41:09.044191   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:41:09.053192   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:41:09.053253   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:41:09.062560   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.071543   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:41:09.071616   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.080990   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:41:09.089937   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:41:09.090011   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:41:09.099338   22837 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:41:09.200102   22837 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:41:09.200206   22837 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:41:09.288288   22837 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:41:09.288440   22837 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:41:09.288580   22837 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:41:09.299649   22837 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:41:09.414648   22837 out.go:235]   - Generating certificates and keys ...
	I0924 18:41:09.414792   22837 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:41:09.414929   22837 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:41:09.453019   22837 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:41:09.665252   22837 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:41:09.786773   22837 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:41:09.895285   22837 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:41:10.253463   22837 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:41:10.253620   22837 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.418238   22837 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:41:10.418481   22837 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.573281   22837 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:41:10.657693   22837 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:41:10.807528   22837 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:41:10.807638   22837 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:41:10.929209   22837 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:41:11.169941   22837 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:41:11.264501   22837 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:41:11.399230   22837 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:41:11.616228   22837 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:41:11.616627   22837 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:41:11.619943   22837 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:41:11.621650   22837 out.go:235]   - Booting up control plane ...
	I0924 18:41:11.621746   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:41:11.621863   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:41:11.621965   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:41:11.642334   22837 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:41:11.648424   22837 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:41:11.648483   22837 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:41:11.789428   22837 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:41:11.789563   22837 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:41:12.790634   22837 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001755257s
	I0924 18:41:12.790735   22837 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:41:18.478058   22837 kubeadm.go:310] [api-check] The API server is healthy after 5.68964956s
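The kubelet-check and api-check phases above are plain healthz polls: kubeadm retries an HTTP health endpoint until it answers 200 or a deadline passes. A minimal Go sketch of such a poll against the kubelet healthz URL quoted in the log (the 4-minute timeout matches the message above; the interval is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}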
	I0924 18:41:18.493860   22837 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:41:18.510122   22837 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:41:18.541786   22837 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:41:18.541987   22837 kubeadm.go:310] [mark-control-plane] Marking the node ha-685475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:41:18.554344   22837 kubeadm.go:310] [bootstrap-token] Using token: 7i3lxo.hk68lojtv0dswhd7
	I0924 18:41:18.555710   22837 out.go:235]   - Configuring RBAC rules ...
	I0924 18:41:18.555857   22837 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:41:18.562776   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:41:18.572835   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:41:18.581420   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:41:18.584989   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:41:18.590727   22837 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:41:18.886783   22837 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:41:19.308273   22837 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:41:19.885351   22837 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:41:19.886864   22837 kubeadm.go:310] 
	I0924 18:41:19.886947   22837 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:41:19.886955   22837 kubeadm.go:310] 
	I0924 18:41:19.887084   22837 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:41:19.887110   22837 kubeadm.go:310] 
	I0924 18:41:19.887149   22837 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:41:19.887252   22837 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:41:19.887307   22837 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:41:19.887317   22837 kubeadm.go:310] 
	I0924 18:41:19.887400   22837 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:41:19.887409   22837 kubeadm.go:310] 
	I0924 18:41:19.887475   22837 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:41:19.887492   22837 kubeadm.go:310] 
	I0924 18:41:19.887567   22837 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:41:19.887670   22837 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:41:19.887778   22837 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:41:19.887818   22837 kubeadm.go:310] 
	I0924 18:41:19.887934   22837 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:41:19.888013   22837 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:41:19.888020   22837 kubeadm.go:310] 
	I0924 18:41:19.888111   22837 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888252   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:41:19.888288   22837 kubeadm.go:310] 	--control-plane 
	I0924 18:41:19.888296   22837 kubeadm.go:310] 
	I0924 18:41:19.888373   22837 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:41:19.888384   22837 kubeadm.go:310] 
	I0924 18:41:19.888452   22837 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888539   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:41:19.889407   22837 kubeadm.go:310] W0924 18:41:09.185692     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889718   22837 kubeadm.go:310] W0924 18:41:09.186387     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889856   22837 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:41:19.889883   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:19.889890   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:19.892313   22837 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 18:41:19.893563   22837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 18:41:19.898820   22837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 18:41:19.898856   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 18:41:19.916356   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0924 18:41:20.290022   22837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:41:20.290096   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.290149   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475 minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=true
	I0924 18:41:20.340090   22837 ops.go:34] apiserver oom_adj: -16
	I0924 18:41:20.448075   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.948257   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.448755   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.948360   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.448489   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.948535   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:23.038503   22837 kubeadm.go:1113] duration metric: took 2.748466322s to wait for elevateKubeSystemPrivileges
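The repeated 'kubectl get sa default' runs above are a retry loop: the cluster is only treated as ready for the RBAC binding once the default service account exists. A hedged Go sketch of that wait (kubectl path and kubeconfig are the ones in the log; interval and timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same command the log retries until it succeeds.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}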
	I0924 18:41:23.038543   22837 kubeadm.go:394] duration metric: took 14.096402684s to StartCluster
	I0924 18:41:23.038566   22837 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.038649   22837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.039313   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.039501   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:41:23.039502   22837 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.039576   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:41:23.039526   22837 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 18:41:23.039598   22837 addons.go:69] Setting storage-provisioner=true in profile "ha-685475"
	I0924 18:41:23.039615   22837 addons.go:234] Setting addon storage-provisioner=true in "ha-685475"
	I0924 18:41:23.039616   22837 addons.go:69] Setting default-storageclass=true in profile "ha-685475"
	I0924 18:41:23.039640   22837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-685475"
	I0924 18:41:23.039645   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.039696   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.040106   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040124   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040143   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.040155   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.054906   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0924 18:41:23.055238   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0924 18:41:23.055452   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055608   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055957   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.055986   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056221   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.056245   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056263   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056409   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.056534   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056961   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.056989   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.058582   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.058812   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 18:41:23.059257   22837 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 18:41:23.059411   22837 addons.go:234] Setting addon default-storageclass=true in "ha-685475"
	I0924 18:41:23.059452   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.059725   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.059753   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.070908   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0924 18:41:23.071353   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.071899   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.071925   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.072270   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.072451   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.073858   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0924 18:41:23.073870   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.074183   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.074573   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.074598   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.074991   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.075491   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.075531   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.075879   22837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:41:23.077225   22837 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.077247   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:41:23.077265   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.079855   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080215   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.080236   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080425   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.080576   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.080722   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.080813   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.091212   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0924 18:41:23.091717   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.092134   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.092151   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.092427   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.092615   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.094110   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.094306   22837 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.094320   22837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:41:23.094337   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.097202   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097634   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.097661   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097807   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.097981   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.098125   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.098244   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.157451   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
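	For reference, the fragment this sed pipeline injects into the coredns Corefile (reconstructed here from the command itself, not captured from the live ConfigMap) is a hosts block inserted immediately before the existing "forward . /etc/resolv.conf" entry, plus a "log" directive inserted before "errors":

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }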
	I0924 18:41:23.219332   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.236503   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.513482   22837 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0924 18:41:23.780293   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780320   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780368   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780387   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780643   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780651   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780659   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780662   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780669   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780671   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780677   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780679   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780872   22837 main.go:141] libmachine: (ha-685475) DBG | Closing plugin on server side
	I0924 18:41:23.780906   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780911   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780967   22837 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 18:41:23.780985   22837 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 18:41:23.781073   22837 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 18:41:23.781083   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.781093   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.781099   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.795500   22837 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0924 18:41:23.796218   22837 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 18:41:23.796237   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.796248   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.796255   22837 round_trippers.go:473]     Content-Type: application/json
	I0924 18:41:23.796259   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.798194   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0924 18:41:23.798350   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.798369   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.798603   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.798620   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.800167   22837 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 18:41:23.801238   22837 addons.go:510] duration metric: took 761.715981ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 18:41:23.801274   22837 start.go:246] waiting for cluster config update ...
	I0924 18:41:23.801288   22837 start.go:255] writing updated cluster config ...
	I0924 18:41:23.802705   22837 out.go:201] 
	I0924 18:41:23.804213   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.804273   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.806007   22837 out.go:177] * Starting "ha-685475-m02" control-plane node in "ha-685475" cluster
	I0924 18:41:23.807501   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:23.807522   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:41:23.807605   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:41:23.807617   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:41:23.807680   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.807853   22837 start.go:360] acquireMachinesLock for ha-685475-m02: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:41:23.807905   22837 start.go:364] duration metric: took 31.255µs to acquireMachinesLock for "ha-685475-m02"
	I0924 18:41:23.807922   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.808020   22837 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 18:41:23.809639   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:41:23.809702   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.809724   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.823910   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0924 18:41:23.824393   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.824838   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.824857   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.825193   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.825352   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:23.825501   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:23.825615   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:41:23.825634   22837 client.go:168] LocalClient.Create starting
	I0924 18:41:23.825657   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:41:23.825684   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825697   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825743   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:41:23.825761   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825771   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825785   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:41:23.825792   22837 main.go:141] libmachine: (ha-685475-m02) Calling .PreCreateCheck
	I0924 18:41:23.825960   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:23.826338   22837 main.go:141] libmachine: Creating machine...
	I0924 18:41:23.826355   22837 main.go:141] libmachine: (ha-685475-m02) Calling .Create
	I0924 18:41:23.826493   22837 main.go:141] libmachine: (ha-685475-m02) Creating KVM machine...
	I0924 18:41:23.827625   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing default KVM network
	I0924 18:41:23.827759   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing private KVM network mk-ha-685475
	I0924 18:41:23.827871   22837 main.go:141] libmachine: (ha-685475-m02) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:23.827888   22837 main.go:141] libmachine: (ha-685475-m02) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:41:23.827966   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:23.827870   23203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:23.828041   22837 main.go:141] libmachine: (ha-685475-m02) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:41:24.081911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.081766   23203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa...
	I0924 18:41:24.287254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287116   23203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk...
	I0924 18:41:24.287289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing magic tar header
	I0924 18:41:24.287303   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing SSH key tar header
	I0924 18:41:24.287322   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287234   23203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:24.287343   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02
	I0924 18:41:24.287363   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 (perms=drwx------)
	I0924 18:41:24.287376   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:41:24.287386   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:41:24.287429   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:24.287454   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:41:24.287465   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:41:24.287486   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:41:24.287508   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:41:24.287521   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:41:24.287531   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:41:24.287541   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:41:24.287551   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home
	I0924 18:41:24.287560   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Skipping /home - not owner
	I0924 18:41:24.287570   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:24.288399   22837 main.go:141] libmachine: (ha-685475-m02) define libvirt domain using xml: 
	I0924 18:41:24.288421   22837 main.go:141] libmachine: (ha-685475-m02) <domain type='kvm'>
	I0924 18:41:24.288434   22837 main.go:141] libmachine: (ha-685475-m02)   <name>ha-685475-m02</name>
	I0924 18:41:24.288441   22837 main.go:141] libmachine: (ha-685475-m02)   <memory unit='MiB'>2200</memory>
	I0924 18:41:24.288467   22837 main.go:141] libmachine: (ha-685475-m02)   <vcpu>2</vcpu>
	I0924 18:41:24.288485   22837 main.go:141] libmachine: (ha-685475-m02)   <features>
	I0924 18:41:24.288491   22837 main.go:141] libmachine: (ha-685475-m02)     <acpi/>
	I0924 18:41:24.288498   22837 main.go:141] libmachine: (ha-685475-m02)     <apic/>
	I0924 18:41:24.288503   22837 main.go:141] libmachine: (ha-685475-m02)     <pae/>
	I0924 18:41:24.288510   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288517   22837 main.go:141] libmachine: (ha-685475-m02)   </features>
	I0924 18:41:24.288525   22837 main.go:141] libmachine: (ha-685475-m02)   <cpu mode='host-passthrough'>
	I0924 18:41:24.288550   22837 main.go:141] libmachine: (ha-685475-m02)   
	I0924 18:41:24.288565   22837 main.go:141] libmachine: (ha-685475-m02)   </cpu>
	I0924 18:41:24.288574   22837 main.go:141] libmachine: (ha-685475-m02)   <os>
	I0924 18:41:24.288586   22837 main.go:141] libmachine: (ha-685475-m02)     <type>hvm</type>
	I0924 18:41:24.288602   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='cdrom'/>
	I0924 18:41:24.288616   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='hd'/>
	I0924 18:41:24.288629   22837 main.go:141] libmachine: (ha-685475-m02)     <bootmenu enable='no'/>
	I0924 18:41:24.288636   22837 main.go:141] libmachine: (ha-685475-m02)   </os>
	I0924 18:41:24.288648   22837 main.go:141] libmachine: (ha-685475-m02)   <devices>
	I0924 18:41:24.288661   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='cdrom'>
	I0924 18:41:24.288679   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/boot2docker.iso'/>
	I0924 18:41:24.288689   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hdc' bus='scsi'/>
	I0924 18:41:24.288695   22837 main.go:141] libmachine: (ha-685475-m02)       <readonly/>
	I0924 18:41:24.288703   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288712   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='disk'>
	I0924 18:41:24.288725   22837 main.go:141] libmachine: (ha-685475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:41:24.288738   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk'/>
	I0924 18:41:24.288748   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hda' bus='virtio'/>
	I0924 18:41:24.288756   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288767   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288778   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='mk-ha-685475'/>
	I0924 18:41:24.288788   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288796   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288805   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288814   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='default'/>
	I0924 18:41:24.288827   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288835   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288848   22837 main.go:141] libmachine: (ha-685475-m02)     <serial type='pty'>
	I0924 18:41:24.288862   22837 main.go:141] libmachine: (ha-685475-m02)       <target port='0'/>
	I0924 18:41:24.288876   22837 main.go:141] libmachine: (ha-685475-m02)     </serial>
	I0924 18:41:24.288885   22837 main.go:141] libmachine: (ha-685475-m02)     <console type='pty'>
	I0924 18:41:24.288892   22837 main.go:141] libmachine: (ha-685475-m02)       <target type='serial' port='0'/>
	I0924 18:41:24.288900   22837 main.go:141] libmachine: (ha-685475-m02)     </console>
	I0924 18:41:24.288911   22837 main.go:141] libmachine: (ha-685475-m02)     <rng model='virtio'>
	I0924 18:41:24.288922   22837 main.go:141] libmachine: (ha-685475-m02)       <backend model='random'>/dev/random</backend>
	I0924 18:41:24.288928   22837 main.go:141] libmachine: (ha-685475-m02)     </rng>
	I0924 18:41:24.288935   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288944   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288956   22837 main.go:141] libmachine: (ha-685475-m02)   </devices>
	I0924 18:41:24.288965   22837 main.go:141] libmachine: (ha-685475-m02) </domain>
	I0924 18:41:24.288975   22837 main.go:141] libmachine: (ha-685475-m02) 
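	For readability, the domain definition printed line by line above assembles into the following libvirt XML (reconstructed from those log entries; the blank placeholder entries in the log are dropped):

	<domain type='kvm'>
	  <name>ha-685475-m02</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-685475'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>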
	I0924 18:41:24.294992   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:bf:94:ad in network default
	I0924 18:41:24.295458   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring networks are active...
	I0924 18:41:24.295479   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:24.296154   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network default is active
	I0924 18:41:24.296453   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network mk-ha-685475 is active
	I0924 18:41:24.296812   22837 main.go:141] libmachine: (ha-685475-m02) Getting domain xml...
	I0924 18:41:24.297403   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:25.511930   22837 main.go:141] libmachine: (ha-685475-m02) Waiting to get IP...
	I0924 18:41:25.512699   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.513104   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.513143   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.513091   23203 retry.go:31] will retry after 234.16067ms: waiting for machine to come up
	I0924 18:41:25.748453   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.748989   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.749022   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.748910   23203 retry.go:31] will retry after 253.354873ms: waiting for machine to come up
	I0924 18:41:26.004434   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.004963   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.004991   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.004930   23203 retry.go:31] will retry after 301.553898ms: waiting for machine to come up
	I0924 18:41:26.308451   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.308934   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.308961   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.308888   23203 retry.go:31] will retry after 500.936612ms: waiting for machine to come up
	I0924 18:41:26.811529   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.812030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.812051   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.811979   23203 retry.go:31] will retry after 494.430185ms: waiting for machine to come up
	I0924 18:41:27.307617   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.308186   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.308222   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.308158   23203 retry.go:31] will retry after 624.183064ms: waiting for machine to come up
	I0924 18:41:27.933772   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.934215   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.934243   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.934171   23203 retry.go:31] will retry after 1.048717591s: waiting for machine to come up
	I0924 18:41:28.984256   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:28.984722   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:28.984750   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:28.984681   23203 retry.go:31] will retry after 1.344803754s: waiting for machine to come up
	I0924 18:41:30.331184   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:30.331665   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:30.331695   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:30.331611   23203 retry.go:31] will retry after 1.462041717s: waiting for machine to come up
	I0924 18:41:31.796038   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:31.796495   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:31.796521   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:31.796439   23203 retry.go:31] will retry after 1.946036169s: waiting for machine to come up
	I0924 18:41:33.743834   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:33.744264   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:33.744289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:33.744229   23203 retry.go:31] will retry after 1.953552894s: waiting for machine to come up
	I0924 18:41:35.699784   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:35.700188   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:35.700207   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:35.700142   23203 retry.go:31] will retry after 3.550334074s: waiting for machine to come up
	I0924 18:41:39.251459   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:39.251859   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:39.251883   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:39.251819   23203 retry.go:31] will retry after 3.096214207s: waiting for machine to come up
	I0924 18:41:42.351720   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:42.352147   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:42.352168   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:42.352109   23203 retry.go:31] will retry after 5.133975311s: waiting for machine to come up
	I0924 18:41:47.489864   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490368   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has current primary IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490384   22837 main.go:141] libmachine: (ha-685475-m02) Found IP for machine: 192.168.39.17
	I0924 18:41:47.490392   22837 main.go:141] libmachine: (ha-685475-m02) Reserving static IP address...
	I0924 18:41:47.490898   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find host DHCP lease matching {name: "ha-685475-m02", mac: "52:54:00:c4:34:39", ip: "192.168.39.17"} in network mk-ha-685475
	I0924 18:41:47.562679   22837 main.go:141] libmachine: (ha-685475-m02) Reserved static IP address: 192.168.39.17
	I0924 18:41:47.562701   22837 main.go:141] libmachine: (ha-685475-m02) Waiting for SSH to be available...
	I0924 18:41:47.562710   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Getting to WaitForSSH function...
	I0924 18:41:47.565356   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565738   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.565768   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565964   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH client type: external
	I0924 18:41:47.565988   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa (-rw-------)
	I0924 18:41:47.566029   22837 main.go:141] libmachine: (ha-685475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:47.566047   22837 main.go:141] libmachine: (ha-685475-m02) DBG | About to run SSH command:
	I0924 18:41:47.566064   22837 main.go:141] libmachine: (ha-685475-m02) DBG | exit 0
	I0924 18:41:47.686618   22837 main.go:141] libmachine: (ha-685475-m02) DBG | SSH cmd err, output: <nil>: 
	I0924 18:41:47.686909   22837 main.go:141] libmachine: (ha-685475-m02) KVM machine creation complete!
	I0924 18:41:47.687246   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:47.687732   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.687897   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.688053   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:47.688065   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetState
	I0924 18:41:47.689263   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:47.689278   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:47.689283   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:47.689288   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.691350   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691620   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.691646   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691809   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.691967   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692084   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692218   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.692337   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.692527   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.692540   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:47.794027   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:47.794050   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:47.794060   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.796879   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797224   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.797254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797407   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.797704   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.797913   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.798111   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.798287   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.798451   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.798462   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:47.903254   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:47.903300   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:47.903305   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:47.903313   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903564   22837 buildroot.go:166] provisioning hostname "ha-685475-m02"
	I0924 18:41:47.903593   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903777   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.906337   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906672   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.906694   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906854   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.907009   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907154   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907284   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.907446   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.907641   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.907655   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m02 && echo "ha-685475-m02" | sudo tee /etc/hostname
	I0924 18:41:48.025784   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m02
	
	I0924 18:41:48.025820   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.028558   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.028880   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.028907   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.029107   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.029274   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029415   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029559   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.029722   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.029915   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.029932   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:48.139194   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:48.139227   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:48.139248   22837 buildroot.go:174] setting up certificates
	I0924 18:41:48.139267   22837 provision.go:84] configureAuth start
	I0924 18:41:48.139280   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:48.139566   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.142585   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143024   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.143053   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143201   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.145124   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145481   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.145505   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145654   22837 provision.go:143] copyHostCerts
	I0924 18:41:48.145692   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145726   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:48.145735   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145801   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:48.145869   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145886   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:48.145891   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145915   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:48.145955   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145971   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:48.145977   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145998   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:48.146040   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m02 san=[127.0.0.1 192.168.39.17 ha-685475-m02 localhost minikube]
	I0924 18:41:48.245573   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:48.245622   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:48.245643   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.248802   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249274   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.249306   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249504   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.249706   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.249847   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.249994   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.328761   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:48.328834   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:48.362627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:48.362710   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:41:48.384868   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:48.384964   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:41:48.408148   22837 provision.go:87] duration metric: took 268.869175ms to configureAuth
	I0924 18:41:48.408177   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:48.408340   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:48.408409   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.410657   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411048   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.411073   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411241   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.411430   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411632   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411784   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.411937   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.412089   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.412102   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:48.621639   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:41:48.621659   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:48.621667   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetURL
	I0924 18:41:48.622862   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using libvirt version 6000000
	I0924 18:41:48.624753   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625070   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.625087   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625272   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:48.625285   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:48.625291   22837 client.go:171] duration metric: took 24.799650651s to LocalClient.Create
	I0924 18:41:48.625312   22837 start.go:167] duration metric: took 24.799696127s to libmachine.API.Create "ha-685475"
	I0924 18:41:48.625325   22837 start.go:293] postStartSetup for "ha-685475-m02" (driver="kvm2")
	I0924 18:41:48.625340   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:48.625360   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.625542   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:48.625572   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.627676   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.628052   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.628342   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.628517   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.628659   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.708913   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:48.712956   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:48.712978   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:48.713046   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:48.713130   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:48.713141   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:48.713240   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:48.722192   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:48.744383   22837 start.go:296] duration metric: took 119.042113ms for postStartSetup
	I0924 18:41:48.744432   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:48.745000   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.747573   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.747893   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.747910   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.748162   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:48.748334   22837 start.go:128] duration metric: took 24.940306164s to createHost
	I0924 18:41:48.748356   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.750542   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.750887   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.750911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.751015   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.751176   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751307   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751425   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.751593   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.751774   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.751787   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:48.851074   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203308.831222046
	
	I0924 18:41:48.851092   22837 fix.go:216] guest clock: 1727203308.831222046
	I0924 18:41:48.851099   22837 fix.go:229] Guest: 2024-09-24 18:41:48.831222046 +0000 UTC Remote: 2024-09-24 18:41:48.748344809 +0000 UTC m=+73.162730067 (delta=82.877237ms)
	I0924 18:41:48.851113   22837 fix.go:200] guest clock delta is within tolerance: 82.877237ms
	I0924 18:41:48.851118   22837 start.go:83] releasing machines lock for "ha-685475-m02", held for 25.043203349s
	I0924 18:41:48.851134   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.851348   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.853818   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.854112   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.854136   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.856508   22837 out.go:177] * Found network options:
	I0924 18:41:48.857890   22837 out.go:177]   - NO_PROXY=192.168.39.7
	W0924 18:41:48.859133   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.859180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859668   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859884   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859962   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:48.860002   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	W0924 18:41:48.860062   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.860122   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:48.860142   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.862654   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.862677   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863021   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863046   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863071   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863085   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863235   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863400   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863436   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863592   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863623   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863730   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863735   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.863845   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:49.100910   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:49.106567   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:49.106646   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:49.123612   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:49.123643   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:49.123708   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:49.142937   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:49.156490   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:49.156545   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:49.169527   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:49.182177   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:49.291858   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:49.459326   22837 docker.go:233] disabling docker service ...
	I0924 18:41:49.459396   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:49.472974   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:49.485001   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:49.613925   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:49.729893   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:41:49.742924   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:49.760372   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:49.760435   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.771854   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:49.771935   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.783072   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.792955   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.802788   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:49.813021   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.822734   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.838535   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.848192   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:49.856844   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:49.856899   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:49.869401   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:41:49.878419   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:50.004449   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
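The sed edits above (pause image, cgroup driver, conmon cgroup, and the unprivileged-port sysctl) all target the same CRI-O drop-in before this restart. A minimal spot-check sketch, assuming the stock 02-crio.conf shipped in the minikube ISO; the grep and its expected output are illustrative and were not run by this test:

# Hypothetical verification, not part of the test log.
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
# Assumed result after the edits above:
#   pause_image = "registry.k8s.io/pause:3.10"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",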
	I0924 18:41:50.089923   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:50.090004   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:50.094371   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:50.094436   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:50.097914   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:50.136366   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:50.136456   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:50.162234   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:50.190445   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:50.191917   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:41:50.193261   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:50.195868   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196181   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:50.196210   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196416   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:50.200556   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:50.212678   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:41:50.212868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:50.213191   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.213221   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.227693   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0924 18:41:50.228149   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.228595   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.228613   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.228905   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.229090   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:50.230680   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:50.230980   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.231004   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.244907   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0924 18:41:50.245219   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.245604   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.245626   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.245901   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.246055   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:50.246187   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.17
	I0924 18:41:50.246201   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:50.246216   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.246327   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:50.246369   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:50.246378   22837 certs.go:256] generating profile certs ...
	I0924 18:41:50.246440   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:50.246464   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698
	I0924 18:41:50.246474   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.254]
	I0924 18:41:50.598027   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 ...
	I0924 18:41:50.598058   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698: {Name:mkf8f0e99ce8df80e2d67426d0c1db2d0002fe45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598227   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 ...
	I0924 18:41:50.598240   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698: {Name:mk2fd7db9063cce26eb5db83e155e40a1d36f1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598308   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:50.598434   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
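minikube issues the apiserver certificate in-process (crypto.go) from minikubeCA with the IP SANs listed above. A rough hand-rolled equivalent with openssl is sketched below; the subject, validity, and file names are assumptions for illustration only, not what the test produced:

# Illustrative only: openssl equivalent of the SAN certificate generated above.
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
    -keyout apiserver.key -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.7,IP:192.168.39.17,IP:192.168.39.254')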
	I0924 18:41:50.598561   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:50.598577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:50.598590   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:50.598601   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:50.598615   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:50.598627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:50.598639   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:50.598651   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:50.598663   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:50.598707   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:50.598733   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:50.598743   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:50.598763   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:50.598790   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:50.598808   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:50.598860   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:50.598885   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:50.598899   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:50.598912   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:50.598943   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:50.601751   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602261   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:50.602302   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:50.602632   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:50.602771   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:50.602890   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:50.675173   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:41:50.679977   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:41:50.690734   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:41:50.694531   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:41:50.704513   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:41:50.708108   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:41:50.717272   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:41:50.721123   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:41:50.730473   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:41:50.733963   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:41:50.742805   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:41:50.746245   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:41:50.755896   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:50.779844   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:50.802343   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:50.824768   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:50.846513   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 18:41:50.868210   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:50.890482   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:50.912726   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:50.933992   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:50.954961   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:50.976681   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:50.999088   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:41:51.016166   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:41:51.032873   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:41:51.047752   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:41:51.062770   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:41:51.078108   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:41:51.093675   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 18:41:51.109375   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:51.115481   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:51.125989   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130012   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130079   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.135264   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:51.144716   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:51.154096   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158032   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158077   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.163212   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:41:51.172662   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:51.182229   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186313   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186363   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.191704   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
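The three test/ln pairs above create the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that the system trust store looks up. A small sketch of how such a name is derived, using the same openssl invocation the log already runs; purely illustrative:

# The symlink name is the certificate's subject hash plus ".0".
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"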
	I0924 18:41:51.202091   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:51.205856   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:51.205922   22837 kubeadm.go:934] updating node {m02 192.168.39.17 8443 v1.31.1 crio true true} ...
	I0924 18:41:51.206011   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:41:51.206039   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:51.206072   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:51.221517   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:51.221584   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
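Once kubelet is started with /etc/kubernetes/manifests as its static-pod path (the manifest is copied there further down), the config above should surface as a mirror pod named after the node. A hypothetical follow-up check, not executed by this test:

# Assumes the kubeconfig context created by this profile.
kubectl --context ha-685475 -n kube-system get pod kube-vip-ha-685475-m02 -o wide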
	I0924 18:41:51.221651   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.229924   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:41:51.229982   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.238555   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:41:51.238577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238641   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238665   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 18:41:51.238675   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 18:41:51.242749   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:41:51.242771   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:41:51.999295   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:51.999376   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:52.004346   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:41:52.004382   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:41:52.162918   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:41:52.197388   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.197497   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.207217   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:41:52.207268   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 18:41:52.538567   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:41:52.547052   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:41:52.561548   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:52.576215   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:41:52.591227   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:52.594529   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:52.604896   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:52.719375   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:52.736097   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:52.736483   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:52.736538   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:52.752065   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0924 18:41:52.752444   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:52.752959   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:52.752982   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:52.753304   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:52.753474   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:52.753613   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:52.753696   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:41:52.753710   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:52.756694   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757114   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:52.757131   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757308   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:52.757468   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:52.757629   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:52.757745   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:52.888925   22837 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:52.888975   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I0924 18:42:11.743600   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (18.8545724s)
	I0924 18:42:11.743651   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:42:12.256325   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m02 minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:42:12.517923   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:42:12.615905   22837 start.go:319] duration metric: took 19.86228628s to joinCluster
	I0924 18:42:12.616009   22837 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:42:12.616334   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:12.617637   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:42:12.618871   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:42:12.853779   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:42:12.878467   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:42:12.878815   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:42:12.878931   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:42:12.879186   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m02" to be "Ready" ...
	I0924 18:42:12.879290   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:12.879301   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:12.879309   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:12.879314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:12.895218   22837 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 18:42:13.380409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.380434   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.380445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.380450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.385029   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:13.879387   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.879410   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.879422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.879428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.883592   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:14.380062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.380082   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.380090   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.380095   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.397523   22837 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0924 18:42:14.879492   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.879513   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.879520   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.879526   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.882118   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:14.882608   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:15.380119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.380151   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.380164   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.380170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.383053   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:15.879674   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.879694   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.879702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.879708   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.882714   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.379456   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.379481   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.379490   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.379493   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.383195   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:16.880066   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.880089   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.880098   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.880105   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.882954   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.883690   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:17.380052   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.380084   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.380093   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.380096   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.384312   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:17.879766   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.879786   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.879794   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.879799   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.882650   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:18.379440   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.379460   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.379468   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.379474   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.382655   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.879894   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.879916   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.879925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.879931   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.883892   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.884363   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:19.379514   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.379537   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.379549   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.379555   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.383053   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:19.880045   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.880066   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.880075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.880080   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.883375   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:20.380221   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.380247   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.380256   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.380261   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.383167   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:20.879751   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.879771   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.879780   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.879784   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.883632   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.379420   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.379440   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.379449   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.379454   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.382852   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.383642   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:21.880087   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.880142   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.880147   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.883894   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.379995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.380016   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.380024   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.380028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.383198   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.879355   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.879379   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.879389   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.879394   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.882598   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.380170   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.380191   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.380198   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.380201   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.383280   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.383852   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:23.879484   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.879505   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.879514   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.879518   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.882485   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:24.380050   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.380072   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.380080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.380084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.383563   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:24.880157   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.880189   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.880201   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.880208   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.883633   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.379493   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.379514   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.379522   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.379527   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.382668   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.880369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.880389   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.880398   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.880401   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.884483   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:25.884968   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:26.380398   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.380418   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.380426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.380431   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.384043   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:26.880095   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.880131   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.880136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.884191   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:27.380154   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.380180   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.380192   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.380199   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.383272   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:27.879506   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.879528   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.879539   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.879556   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.882360   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:28.380188   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.380208   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.380217   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.380222   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.383324   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:28.384179   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:28.880029   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.880052   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.880064   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.880072   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.883130   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.380071   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.380098   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.380110   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.380117   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.383220   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.880044   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.880064   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.880072   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.880077   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.883469   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.379846   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.379865   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.379873   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.379877   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.382760   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.880337   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.880358   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.880367   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.880371   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.883587   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.884005   22837 node_ready.go:49] node "ha-685475-m02" has status "Ready":"True"
	I0924 18:42:30.884024   22837 node_ready.go:38] duration metric: took 18.004817095s for node "ha-685475-m02" to be "Ready" ...
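
The repeated GETs against /api/v1/nodes/ha-685475-m02 above are the node_ready wait: the node object is re-read roughly every 500ms until its Ready condition reports "True", which here took about 18s. A minimal client-go sketch of that loop follows; the kubeconfig path and timeout are illustrative assumptions, not values taken from this run.

// nodeready_sketch.go - hedged sketch, not minikube's actual implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready:"True"
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
	}
	return fmt.Errorf("node %s never became Ready", name)
}

func main() {
	// Hypothetical kubeconfig path; the test run uses its own profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-685475-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}
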
	I0924 18:42:30.884035   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:42:30.884109   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:30.884120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.884130   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.884136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.889226   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:42:30.898516   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.898598   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:42:30.898608   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.898616   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.898621   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.901236   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.901749   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.901762   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.901769   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.901773   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.903992   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.904550   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.904563   22837 pod_ready.go:82] duration metric: took 6.024673ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904570   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:42:30.904627   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.904634   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.904639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.907019   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.907540   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.907554   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.907560   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.907564   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.909829   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.910347   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.910361   22837 pod_ready.go:82] duration metric: took 5.783749ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910369   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910412   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:42:30.910421   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.910427   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.910431   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.912745   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.913606   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.913622   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.913632   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.913639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.916274   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.916867   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.916881   22837 pod_ready.go:82] duration metric: took 6.50607ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916889   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916939   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:42:30.916948   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.916955   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.916960   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.919434   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.919982   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.919996   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.920003   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.920007   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.921770   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0924 18:42:30.922347   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.922367   22837 pod_ready.go:82] duration metric: took 5.471344ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.922386   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.080824   22837 request.go:632] Waited for 158.3458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080885   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080893   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.080904   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.080910   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.084145   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.281150   22837 request.go:632] Waited for 196.368053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281219   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281226   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.281237   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.281243   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.284822   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.285606   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.285626   22837 pod_ready.go:82] duration metric: took 363.227315ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
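
The "Waited for ... due to client-side throttling" lines are produced by client-go's default rate limiter (QPS 5, burst 10), which delays the paired pod+node GETs issued for each readiness check; as the message itself notes, they are not server-side priority-and-fairness rejections. A hedged sketch of relaxing those limits on a rest.Config, with illustrative values:

// throttle_sketch.go - hedged sketch of why the client-side throttling waits appear.
package throttlesketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func clientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Assumed values for illustration; pick limits appropriate to the API server.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
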
	I0924 18:42:31.285638   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.480778   22837 request.go:632] Waited for 195.072153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480848   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480855   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.480868   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.480875   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.484120   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.681047   22837 request.go:632] Waited for 196.341286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681125   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681133   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.681148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.681151   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.684093   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:31.684648   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.684666   22837 pod_ready.go:82] duration metric: took 399.019878ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.684678   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.880772   22837 request.go:632] Waited for 196.018851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880838   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880846   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.880865   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.880873   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.884578   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.080481   22837 request.go:632] Waited for 195.272795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080548   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080556   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.080567   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.080574   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.083669   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.084153   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.084170   22837 pod_ready.go:82] duration metric: took 399.485153ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.084179   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.281286   22837 request.go:632] Waited for 197.043639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281361   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281367   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.281374   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.281379   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.284317   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:32.481341   22837 request.go:632] Waited for 196.394211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481408   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481414   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.481423   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.481426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.484712   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.485108   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.485126   22837 pod_ready.go:82] duration metric: took 400.941479ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.485135   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.681315   22837 request.go:632] Waited for 196.100251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681368   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681374   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.681382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.681387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.684555   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.880797   22837 request.go:632] Waited for 195.427595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880875   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.880886   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.880916   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.884757   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.885225   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.885244   22837 pod_ready.go:82] duration metric: took 400.103235ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.885253   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.080631   22837 request.go:632] Waited for 195.310618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080703   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.080712   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.080718   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.084028   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.281072   22837 request.go:632] Waited for 196.37227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281123   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281128   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.281136   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.281140   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.284485   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.285140   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.285160   22837 pod_ready.go:82] duration metric: took 399.900589ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.285169   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.481228   22837 request.go:632] Waited for 196.007394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481285   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481290   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.481297   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.481301   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.484526   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.680916   22837 request.go:632] Waited for 195.378531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681014   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.681027   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.681033   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.683790   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:33.684472   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.684489   22837 pod_ready.go:82] duration metric: took 399.314616ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.684498   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.880975   22837 request.go:632] Waited for 196.408433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881026   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881031   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.881038   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.881043   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.884212   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.081232   22837 request.go:632] Waited for 196.342139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081301   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081312   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.081340   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.081347   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.084215   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:34.084885   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:34.084905   22837 pod_ready.go:82] duration metric: took 400.399835ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:34.084918   22837 pod_ready.go:39] duration metric: took 3.200860786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:42:34.084956   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:42:34.085018   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:42:34.099253   22837 api_server.go:72] duration metric: took 21.483198905s to wait for apiserver process to appear ...
	I0924 18:42:34.099269   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:42:34.099293   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:42:34.103172   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:42:34.103230   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:42:34.103238   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.103245   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.103249   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.104031   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:42:34.104219   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:42:34.104236   22837 api_server.go:131] duration metric: took 4.961214ms to wait for apiserver health ...
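
After the pgrep check for the apiserver process, health is probed by a plain GET to https://192.168.39.7:8443/healthz, which returned 200 with a body of "ok". A minimal sketch of such a probe; certificate verification is skipped here purely to keep the example short (minikube itself trusts the cluster CA):

// healthz_sketch.go - hedged sketch of the /healthz probe logged above.
package healthzsketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url) // e.g. https://192.168.39.7:8443/healthz
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}
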
	I0924 18:42:34.104242   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:42:34.280630   22837 request.go:632] Waited for 176.320456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280681   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280686   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.280694   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.280697   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.284696   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.289267   22837 system_pods.go:59] 17 kube-system pods found
	I0924 18:42:34.289298   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.289303   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.289307   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.289312   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.289315   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.289318   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.289322   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.289325   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.289329   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.289333   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.289335   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.289339   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.289341   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.289344   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.289351   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.289355   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.289357   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.289363   22837 system_pods.go:74] duration metric: took 185.114229ms to wait for pod list to return data ...
	I0924 18:42:34.289371   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:42:34.480833   22837 request.go:632] Waited for 191.389799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480905   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480912   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.480920   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.480925   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.484374   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.484575   22837 default_sa.go:45] found service account: "default"
	I0924 18:42:34.484590   22837 default_sa.go:55] duration metric: took 195.213451ms for default service account to be created ...
	I0924 18:42:34.484598   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:42:34.681020   22837 request.go:632] Waited for 196.354693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681092   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681097   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.681105   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.681113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.685266   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:34.689541   22837 system_pods.go:86] 17 kube-system pods found
	I0924 18:42:34.689565   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.689571   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.689574   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.689578   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.689581   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.689585   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.689588   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.689593   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.689598   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.689603   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.689608   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.689616   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.689623   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.689633   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.689638   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.689642   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.689646   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.689652   22837 system_pods.go:126] duration metric: took 205.048658ms to wait for k8s-apps to be running ...
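
The two kube-system listings above serve different checks: the first confirms the expected 17 pods exist, the second (k8s-apps) that each one reports phase Running. A short client-go sketch of the second check:

// syspods_sketch.go - hedged sketch of the kube-system "running" verification above.
package syspodssketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func systemPodsRunning(cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}
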
	I0924 18:42:34.689667   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:42:34.689711   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:34.702696   22837 system_svc.go:56] duration metric: took 13.022824ms WaitForService to wait for kubelet
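
The kubelet service check is a single exit-code test: "sudo systemctl is-active --quiet service kubelet" run over SSH inside the VM, where exit status 0 means the unit is active. A local sketch of the same contract using os/exec (the SSH transport used by ssh_runner is omitted here):

// kubelet_svc_sketch.go - hedged sketch of the kubelet service check.
package kubeletsvcsketch

import (
	"fmt"
	"os/exec"
)

func kubeletActive() error {
	// Same command as the log line above; a non-zero exit surfaces as err.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("kubelet is not active: %w", err)
	}
	return nil
}
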
	I0924 18:42:34.702718   22837 kubeadm.go:582] duration metric: took 22.086667119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:42:34.702741   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:42:34.881196   22837 request.go:632] Waited for 178.393564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881289   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881300   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.881308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.881314   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.885104   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.885818   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885841   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885858   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885862   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885866   22837 node_conditions.go:105] duration metric: took 183.120221ms to run NodePressure ...
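
The NodePressure step reads each node's capacity, reported above as 17734596Ki of ephemeral storage and 2 CPUs for both control-plane nodes. A sketch of pulling those values from the Node objects:

// nodepressure_sketch.go - hedged sketch of the NodePressure verification above.
package nodepressuresketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
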
	I0924 18:42:34.885879   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:42:34.885917   22837 start.go:255] writing updated cluster config ...
	I0924 18:42:34.888071   22837 out.go:201] 
	I0924 18:42:34.889729   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:34.889845   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.891554   22837 out.go:177] * Starting "ha-685475-m03" control-plane node in "ha-685475" cluster
	I0924 18:42:34.893081   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:42:34.893105   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:42:34.893223   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:42:34.893237   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:42:34.893331   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.893543   22837 start.go:360] acquireMachinesLock for ha-685475-m03: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:42:34.893593   22837 start.go:364] duration metric: took 31.193µs to acquireMachinesLock for "ha-685475-m03"
	I0924 18:42:34.893622   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:42:34.893742   22837 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 18:42:34.895364   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:42:34.895477   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:42:34.895520   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:42:34.910309   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0924 18:42:34.910707   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:42:34.911166   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:42:34.911189   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:42:34.911445   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:42:34.911666   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:34.911812   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:34.911970   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:42:34.912006   22837 client.go:168] LocalClient.Create starting
	I0924 18:42:34.912049   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:42:34.912087   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912107   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912168   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:42:34.912193   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912206   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912226   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:42:34.912234   22837 main.go:141] libmachine: (ha-685475-m03) Calling .PreCreateCheck
	I0924 18:42:34.912354   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:34.912664   22837 main.go:141] libmachine: Creating machine...
	I0924 18:42:34.912675   22837 main.go:141] libmachine: (ha-685475-m03) Calling .Create
	I0924 18:42:34.912804   22837 main.go:141] libmachine: (ha-685475-m03) Creating KVM machine...
	I0924 18:42:34.914072   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing default KVM network
	I0924 18:42:34.914216   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing private KVM network mk-ha-685475
	I0924 18:42:34.914343   22837 main.go:141] libmachine: (ha-685475-m03) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:34.914367   22837 main.go:141] libmachine: (ha-685475-m03) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:42:34.914418   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:34.914332   23604 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:34.914495   22837 main.go:141] libmachine: (ha-685475-m03) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:42:35.139279   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.139122   23604 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa...
	I0924 18:42:35.223317   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223211   23604 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk...
	I0924 18:42:35.223345   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing magic tar header
	I0924 18:42:35.223358   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing SSH key tar header
	I0924 18:42:35.223365   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223334   23604 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:35.223430   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03
	I0924 18:42:35.223477   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 (perms=drwx------)
	I0924 18:42:35.223494   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:42:35.223501   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:42:35.223508   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:35.223518   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:42:35.223529   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:42:35.223535   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:42:35.223544   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:42:35.223549   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:35.223557   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:42:35.223562   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:42:35.223568   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:42:35.223575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home
	I0924 18:42:35.223580   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Skipping /home - not owner
	I0924 18:42:35.224656   22837 main.go:141] libmachine: (ha-685475-m03) define libvirt domain using xml: 
	I0924 18:42:35.224680   22837 main.go:141] libmachine: (ha-685475-m03) <domain type='kvm'>
	I0924 18:42:35.224689   22837 main.go:141] libmachine: (ha-685475-m03)   <name>ha-685475-m03</name>
	I0924 18:42:35.224694   22837 main.go:141] libmachine: (ha-685475-m03)   <memory unit='MiB'>2200</memory>
	I0924 18:42:35.224699   22837 main.go:141] libmachine: (ha-685475-m03)   <vcpu>2</vcpu>
	I0924 18:42:35.224704   22837 main.go:141] libmachine: (ha-685475-m03)   <features>
	I0924 18:42:35.224709   22837 main.go:141] libmachine: (ha-685475-m03)     <acpi/>
	I0924 18:42:35.224713   22837 main.go:141] libmachine: (ha-685475-m03)     <apic/>
	I0924 18:42:35.224718   22837 main.go:141] libmachine: (ha-685475-m03)     <pae/>
	I0924 18:42:35.224722   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.224730   22837 main.go:141] libmachine: (ha-685475-m03)   </features>
	I0924 18:42:35.224736   22837 main.go:141] libmachine: (ha-685475-m03)   <cpu mode='host-passthrough'>
	I0924 18:42:35.224742   22837 main.go:141] libmachine: (ha-685475-m03)   
	I0924 18:42:35.224746   22837 main.go:141] libmachine: (ha-685475-m03)   </cpu>
	I0924 18:42:35.224750   22837 main.go:141] libmachine: (ha-685475-m03)   <os>
	I0924 18:42:35.224756   22837 main.go:141] libmachine: (ha-685475-m03)     <type>hvm</type>
	I0924 18:42:35.224761   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='cdrom'/>
	I0924 18:42:35.224770   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='hd'/>
	I0924 18:42:35.224784   22837 main.go:141] libmachine: (ha-685475-m03)     <bootmenu enable='no'/>
	I0924 18:42:35.224794   22837 main.go:141] libmachine: (ha-685475-m03)   </os>
	I0924 18:42:35.224799   22837 main.go:141] libmachine: (ha-685475-m03)   <devices>
	I0924 18:42:35.224808   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='cdrom'>
	I0924 18:42:35.224840   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/boot2docker.iso'/>
	I0924 18:42:35.224861   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hdc' bus='scsi'/>
	I0924 18:42:35.224871   22837 main.go:141] libmachine: (ha-685475-m03)       <readonly/>
	I0924 18:42:35.224885   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224898   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='disk'>
	I0924 18:42:35.224908   22837 main.go:141] libmachine: (ha-685475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:42:35.224920   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk'/>
	I0924 18:42:35.224939   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hda' bus='virtio'/>
	I0924 18:42:35.224949   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224954   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225004   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='mk-ha-685475'/>
	I0924 18:42:35.225029   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225048   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225067   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225079   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='default'/>
	I0924 18:42:35.225088   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225094   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225101   22837 main.go:141] libmachine: (ha-685475-m03)     <serial type='pty'>
	I0924 18:42:35.225106   22837 main.go:141] libmachine: (ha-685475-m03)       <target port='0'/>
	I0924 18:42:35.225112   22837 main.go:141] libmachine: (ha-685475-m03)     </serial>
	I0924 18:42:35.225118   22837 main.go:141] libmachine: (ha-685475-m03)     <console type='pty'>
	I0924 18:42:35.225124   22837 main.go:141] libmachine: (ha-685475-m03)       <target type='serial' port='0'/>
	I0924 18:42:35.225131   22837 main.go:141] libmachine: (ha-685475-m03)     </console>
	I0924 18:42:35.225144   22837 main.go:141] libmachine: (ha-685475-m03)     <rng model='virtio'>
	I0924 18:42:35.225156   22837 main.go:141] libmachine: (ha-685475-m03)       <backend model='random'>/dev/random</backend>
	I0924 18:42:35.225167   22837 main.go:141] libmachine: (ha-685475-m03)     </rng>
	I0924 18:42:35.225176   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225183   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225192   22837 main.go:141] libmachine: (ha-685475-m03)   </devices>
	I0924 18:42:35.225202   22837 main.go:141] libmachine: (ha-685475-m03) </domain>
	I0924 18:42:35.225210   22837 main.go:141] libmachine: (ha-685475-m03) 
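The block above is the libvirt domain definition the KVM driver prints before handing it to libvirt. As a rough sketch of how such an XML document can be rendered from a machine config, the snippet below fills a trimmed-down template with Go's text/template; the struct fields and the shortened XML are illustrative, not minikube's actual template or types.

```go
package main

import (
	"os"
	"text/template"
)

// domainXML is a trimmed-down illustration of the definition printed above;
// the real template carries more devices (cdrom, rng, serial console, ...).
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

// machine holds the handful of values substituted into the template
// (hypothetical struct, for illustration only).
type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	m := machine{
		Name:      "ha-685475-m03",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-685475-m03.rawdisk", // placeholder path
		Network:   "mk-ha-685475",
	}
	// Render to stdout; the driver would instead pass the XML to libvirt's
	// define-domain call (equivalent to `virsh define`).
	if err := tmpl.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
```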
	I0924 18:42:35.232041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:d0:37:5a in network default
	I0924 18:42:35.232661   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:35.232681   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring networks are active...
	I0924 18:42:35.233409   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network default is active
	I0924 18:42:35.233744   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network mk-ha-685475 is active
	I0924 18:42:35.234266   22837 main.go:141] libmachine: (ha-685475-m03) Getting domain xml...
	I0924 18:42:35.235093   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:36.442620   22837 main.go:141] libmachine: (ha-685475-m03) Waiting to get IP...
	I0924 18:42:36.443397   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.443765   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.443802   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.443732   23604 retry.go:31] will retry after 244.798943ms: waiting for machine to come up
	I0924 18:42:36.690206   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.690698   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.690720   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.690654   23604 retry.go:31] will retry after 308.672235ms: waiting for machine to come up
	I0924 18:42:37.000890   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.001339   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.001369   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.001302   23604 retry.go:31] will retry after 346.180057ms: waiting for machine to come up
	I0924 18:42:37.348700   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.349107   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.349134   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.349075   23604 retry.go:31] will retry after 530.317337ms: waiting for machine to come up
	I0924 18:42:37.881459   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.882098   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.882122   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.882050   23604 retry.go:31] will retry after 620.764429ms: waiting for machine to come up
	I0924 18:42:38.504892   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:38.505327   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:38.505356   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:38.505288   23604 retry.go:31] will retry after 656.642966ms: waiting for machine to come up
	I0924 18:42:39.163234   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.163670   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.163696   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.163622   23604 retry.go:31] will retry after 804.533823ms: waiting for machine to come up
	I0924 18:42:39.969249   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.969758   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.969781   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.969719   23604 retry.go:31] will retry after 1.112599979s: waiting for machine to come up
	I0924 18:42:41.083861   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:41.084304   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:41.084326   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:41.084250   23604 retry.go:31] will retry after 1.484881709s: waiting for machine to come up
	I0924 18:42:42.570773   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:42.571260   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:42.571291   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:42.571214   23604 retry.go:31] will retry after 1.470650116s: waiting for machine to come up
	I0924 18:42:44.043746   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:44.044161   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:44.044186   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:44.044127   23604 retry.go:31] will retry after 2.749899674s: waiting for machine to come up
	I0924 18:42:46.796154   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:46.796548   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:46.796586   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:46.796499   23604 retry.go:31] will retry after 2.668083753s: waiting for machine to come up
	I0924 18:42:49.467725   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:49.468171   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:49.468196   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:49.468125   23604 retry.go:31] will retry after 4.505913039s: waiting for machine to come up
	I0924 18:42:53.976055   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:53.976513   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:53.976533   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:53.976473   23604 retry.go:31] will retry after 5.05928848s: waiting for machine to come up
	I0924 18:42:59.039895   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.040268   22837 main.go:141] libmachine: (ha-685475-m03) Found IP for machine: 192.168.39.84
	I0924 18:42:59.040292   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
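The retry.go lines above poll for the new domain's DHCP lease with a growing, jittered delay ("will retry after ...") until an address appears. Below is a minimal sketch of that wait loop; lookupIP is a hypothetical stand-in for the actual lease query against the libvirt network.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the network's DHCP leases
// for the domain's MAC address; here it never succeeds, for illustration.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with an increasing, jittered delay, mirroring
// the "will retry after ..." messages in the log above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the interval between probes
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:47:f3:5c", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```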
	I0924 18:42:59.040302   22837 main.go:141] libmachine: (ha-685475-m03) Reserving static IP address...
	I0924 18:42:59.040633   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find host DHCP lease matching {name: "ha-685475-m03", mac: "52:54:00:47:f3:5c", ip: "192.168.39.84"} in network mk-ha-685475
	I0924 18:42:59.109971   22837 main.go:141] libmachine: (ha-685475-m03) Reserved static IP address: 192.168.39.84
	I0924 18:42:59.110001   22837 main.go:141] libmachine: (ha-685475-m03) Waiting for SSH to be available...
	I0924 18:42:59.110011   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Getting to WaitForSSH function...
	I0924 18:42:59.112837   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.113218   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.113243   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.113377   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH client type: external
	I0924 18:42:59.113400   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa (-rw-------)
	I0924 18:42:59.113429   22837 main.go:141] libmachine: (ha-685475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:42:59.113441   22837 main.go:141] libmachine: (ha-685475-m03) DBG | About to run SSH command:
	I0924 18:42:59.113458   22837 main.go:141] libmachine: (ha-685475-m03) DBG | exit 0
	I0924 18:42:59.234787   22837 main.go:141] libmachine: (ha-685475-m03) DBG | SSH cmd err, output: <nil>: 
	I0924 18:42:59.235096   22837 main.go:141] libmachine: (ha-685475-m03) KVM machine creation complete!
	I0924 18:42:59.235444   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:59.235990   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236156   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236834   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:42:59.236851   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetState
	I0924 18:42:59.238058   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:42:59.238082   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:42:59.238089   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:42:59.238099   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.241168   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241742   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.241769   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241929   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.242092   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242231   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242340   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.242506   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.242695   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.242706   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:42:59.337829   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
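Both WaitForSSH passes above simply run `exit 0` over SSH and treat a clean exit as "the guest is reachable". A weaker but self-contained readiness probe, assuming only the standard library, is to poll the TCP port until a connection succeeds; the real driver goes further and executes the command over an authenticated session.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort dials the guest's SSH port until a TCP connection succeeds
// or the deadline passes. It captures the "is sshd answering yet" idea
// without performing the authenticated `exit 0` check from the log.
func waitForSSHPort(addr string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh port %s not reachable within %v", addr, deadline)
}

func main() {
	if err := waitForSSHPort("192.168.39.84:22", 10*time.Second); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("SSH port is accepting connections")
}
```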
	I0924 18:42:59.337850   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:42:59.337860   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.340431   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340774   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.340806   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340930   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.341115   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341253   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341386   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.341535   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.341719   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.341733   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:42:59.439659   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:42:59.439743   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:42:59.439756   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:42:59.439767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440013   22837 buildroot.go:166] provisioning hostname "ha-685475-m03"
	I0924 18:42:59.440035   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440208   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.443110   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.443484   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443628   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.443776   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.443925   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.444043   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.444195   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.444388   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.444405   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m03 && echo "ha-685475-m03" | sudo tee /etc/hostname
	I0924 18:42:59.552104   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m03
	
	I0924 18:42:59.552146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.555198   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555610   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.555635   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555825   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.555999   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556210   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556377   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.556530   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.556692   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.556725   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:42:59.663026   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
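The shell run just above is an idempotent /etc/hosts fixup: if the hostname is missing, it either rewrites the existing 127.0.1.1 entry or appends one. A rough sketch of the same check-then-patch logic in Go, operating on a local file path for illustration (the provisioner does this remotely via sudo):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname patches a hosts file so the hostname resolves via the
// 127.0.1.1 convention, mirroring the grep/sed/tee sequence in the log.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), hostname) {
		return nil // already present, nothing to do
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	patched := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			patched = true
			break
		}
	}
	if !patched {
		lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.sample" is a placeholder; the log operates on /etc/hosts.
	if err := ensureHostname("hosts.sample", "ha-685475-m03"); err != nil {
		fmt.Println("error:", err)
	}
}
```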
	I0924 18:42:59.663065   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:42:59.663091   22837 buildroot.go:174] setting up certificates
	I0924 18:42:59.663104   22837 provision.go:84] configureAuth start
	I0924 18:42:59.663128   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.663405   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:42:59.666046   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666433   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.666453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666616   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.668726   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669069   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.669093   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669219   22837 provision.go:143] copyHostCerts
	I0924 18:42:59.669250   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669289   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:42:59.669299   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669379   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:42:59.669484   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669511   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:42:59.669521   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669559   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:42:59.669627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669655   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:42:59.669664   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669698   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:42:59.669771   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m03 san=[127.0.0.1 192.168.39.84 ha-685475-m03 localhost minikube]
	I0924 18:43:00.034638   22837 provision.go:177] copyRemoteCerts
	I0924 18:43:00.034686   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:43:00.034707   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.037567   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.037972   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.037994   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.038177   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.038367   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.038523   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.038654   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.116658   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:43:00.116731   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:43:00.138751   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:43:00.138812   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:43:00.160322   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:43:00.160404   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:43:00.182956   22837 provision.go:87] duration metric: took 519.836065ms to configureAuth
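configureAuth above issues a server certificate whose subject alternative names cover the loopback address, the node's IP, and its hostnames, then copies it to /etc/docker on the guest. Below is a self-contained sketch of issuing a certificate with those SANs using crypto/x509; it is self-signed for brevity, whereas the real flow signs with the CA key stored under .minikube/certs.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-685475-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san list printed in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.84")},
		DNSNames:    []string{"ha-685475-m03", "localhost", "minikube"},
	}
	// Self-signed for illustration; minikube signs with its stored CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Printf("issued %d bytes of PEM\n", len(pemBytes))
}
```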
	I0924 18:43:00.182981   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:43:00.183174   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:00.183247   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.186012   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186463   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.186490   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186708   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.186905   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187085   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187211   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.187369   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.187586   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.187604   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:43:00.387241   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:43:00.387266   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:43:00.387274   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetURL
	I0924 18:43:00.388619   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using libvirt version 6000000
	I0924 18:43:00.390883   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391239   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.391267   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391387   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:43:00.391407   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:43:00.391414   22837 client.go:171] duration metric: took 25.479397424s to LocalClient.Create
	I0924 18:43:00.391440   22837 start.go:167] duration metric: took 25.479470372s to libmachine.API.Create "ha-685475"
	I0924 18:43:00.391451   22837 start.go:293] postStartSetup for "ha-685475-m03" (driver="kvm2")
	I0924 18:43:00.391474   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:43:00.391492   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.391777   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:43:00.391810   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.393710   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394015   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.394041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394165   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.394339   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.394452   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.394556   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.473009   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:43:00.477004   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:43:00.477028   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:43:00.477094   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:43:00.477170   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:43:00.477183   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:43:00.477284   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:43:00.486009   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:00.508200   22837 start.go:296] duration metric: took 116.732729ms for postStartSetup
	I0924 18:43:00.508250   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:43:00.508816   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.511555   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.511901   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.511930   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.512205   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:43:00.512420   22837 start.go:128] duration metric: took 25.618667241s to createHost
	I0924 18:43:00.512456   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.514675   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.515063   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515191   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.515334   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515443   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515542   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.515680   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.515847   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.515859   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:43:00.611172   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203380.591704428
	
	I0924 18:43:00.611192   22837 fix.go:216] guest clock: 1727203380.591704428
	I0924 18:43:00.611199   22837 fix.go:229] Guest: 2024-09-24 18:43:00.591704428 +0000 UTC Remote: 2024-09-24 18:43:00.512437538 +0000 UTC m=+144.926822798 (delta=79.26689ms)
	I0924 18:43:00.611227   22837 fix.go:200] guest clock delta is within tolerance: 79.26689ms
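The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only intervene if the skew exceeds a tolerance. A minimal sketch of that comparison follows; the timestamps are taken from the log, while the one-second tolerance is an assumption for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// clockDelta reports the absolute skew between guest and host timestamps and
// whether it falls within the allowed tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log above (guest 1727203380.591704428 vs host
	// 1727203380.512437538); the 1s tolerance is assumed.
	guest := time.Unix(1727203380, 591704428)
	host := time.Unix(1727203380, 512437538)
	delta, ok := clockDelta(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```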
	I0924 18:43:00.611257   22837 start.go:83] releasing machines lock for "ha-685475-m03", held for 25.717628791s
	I0924 18:43:00.611280   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.611536   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.614210   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.614585   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.614613   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.617023   22837 out.go:177] * Found network options:
	I0924 18:43:00.618386   22837 out.go:177]   - NO_PROXY=192.168.39.7,192.168.39.17
	W0924 18:43:00.619538   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.619561   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.619572   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620209   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:43:00.620244   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	W0924 18:43:00.620303   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.620325   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.620388   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:43:00.620402   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.622880   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623148   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623312   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623338   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623544   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623554   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623757   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623887   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623954   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624095   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.624139   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.854971   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:43:00.860491   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:43:00.860570   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:43:00.875041   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:43:00.875064   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:43:00.875138   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:43:00.890952   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:43:00.903982   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:43:00.904031   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:43:00.917362   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:43:00.932669   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:43:01.042282   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:43:01.188592   22837 docker.go:233] disabling docker service ...
	I0924 18:43:01.188652   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:43:01.202602   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:43:01.214596   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:43:01.362941   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:43:01.483096   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:43:01.496147   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:43:01.513707   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:43:01.513773   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.523612   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:43:01.523679   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.534669   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.544789   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.554357   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:43:01.564046   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.573589   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.589268   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
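The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A rough sketch of the same line-oriented rewrite in Go using regexp is shown below; the sample input and the helper name are illustrative, and the real flow edits the file on the guest over SSH.

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteLine replaces any line matching pattern with repl, mirroring the
// `sed -i 's|^.*key = .*$|key = "value"|'` invocations in the log.
func rewriteLine(conf []byte, pattern, repl string) []byte {
	re := regexp.MustCompile("(?m)" + pattern)
	return re.ReplaceAll(conf, []byte(repl))
}

func main() {
	// Sample config contents for illustration only.
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
	conf = rewriteLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = rewriteLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
	fmt.Print(string(conf))
}
```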
	I0924 18:43:01.599288   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:43:01.609178   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:43:01.609244   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:43:01.620961   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:43:01.629927   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:01.745962   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:43:01.839298   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:43:01.839385   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:43:01.843960   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:43:01.844013   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:43:01.847394   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:43:01.883086   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:43:01.883173   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:43:01.910912   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:43:01.939648   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:43:01.941115   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:43:01.942322   22837 out.go:177]   - env NO_PROXY=192.168.39.7,192.168.39.17
	I0924 18:43:01.943445   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:01.945818   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946123   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:01.946145   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946354   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:43:01.950271   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:01.961605   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:43:01.961842   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:01.962136   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.962173   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.976744   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0924 18:43:01.977209   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.977706   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.977723   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.978053   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.978214   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:43:01.979876   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:01.980161   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.980194   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.994159   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0924 18:43:01.994450   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.994902   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.994924   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.995194   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.995386   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:01.995533   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.84
	I0924 18:43:01.995545   22837 certs.go:194] generating shared ca certs ...
	I0924 18:43:01.995558   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:01.995697   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:43:01.995733   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:43:01.995744   22837 certs.go:256] generating profile certs ...
	I0924 18:43:01.995811   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:43:01.995834   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721
	I0924 18:43:01.995847   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.84 192.168.39.254]
	I0924 18:43:02.322791   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 ...
	I0924 18:43:02.322837   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721: {Name:mkebefefa2737490c508c384151059616130ea10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323013   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 ...
	I0924 18:43:02.323026   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721: {Name:mk784db272b18b5ad01513b873f3e2d227a52a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323095   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:43:02.323227   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:43:02.323344   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:43:02.323364   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:43:02.323377   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:43:02.323390   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:43:02.323403   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:43:02.323415   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:43:02.323427   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:43:02.323438   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:43:02.338931   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:43:02.339017   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:43:02.339066   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:43:02.339077   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:43:02.339099   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:43:02.339124   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:43:02.339155   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:43:02.339192   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:02.339227   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.339248   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.339262   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.339300   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:02.342163   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342483   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:02.342502   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342764   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:02.342966   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:02.343115   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:02.343267   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:02.415201   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:43:02.420165   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:43:02.429856   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:43:02.433796   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:43:02.444492   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:43:02.448439   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:43:02.457436   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:43:02.461533   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:43:02.470598   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:43:02.474412   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:43:02.483836   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:43:02.487823   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:43:02.497111   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:43:02.521054   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:43:02.543456   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:43:02.568215   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:43:02.592612   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 18:43:02.615696   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:43:02.644606   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:43:02.666219   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:43:02.687592   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:43:02.709023   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:43:02.730055   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:43:02.751785   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:43:02.766876   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:43:02.781877   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:43:02.801467   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:43:02.818674   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:43:02.833922   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:43:02.850197   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
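The block above is minikube's ssh_runner pushing the shared CA material and profile certs to the new control-plane machine: it stats each remote file, copies whatever is missing from local disk or from memory, and finishes with the kubeconfig. A minimal sketch of the same pattern with golang.org/x/crypto/ssh follows; the host, user, and key path are taken from the log, everything else is an illustrative assumption rather than minikube's actual implementation.

package main

// Sketch only: run one remote command over SSH with a key file, the way the
// ssh_runner lines above do. Paths and addresses are copied from the log.
import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.7:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same existence check the log performs before deciding to copy a file.
	out, err := sess.CombinedOutput("stat -c %s /var/lib/minikube/certs/sa.pub")
	fmt.Printf("%s err=%v\n", out, err)
}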
	I0924 18:43:02.867351   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:43:02.872885   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:43:02.883212   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887607   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887666   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.893210   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:43:02.903216   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:43:02.913130   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917524   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917603   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.922951   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:43:02.932615   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:43:02.942684   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946739   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946793   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.952018   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:43:02.962341   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:43:02.965981   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:43:02.966043   22837 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I0924 18:43:02.966160   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
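The kubelet unit fragment above is what minikube writes into the systemd drop-in for the joining node: ExecStart is overridden so the kubelet from /var/lib/minikube/binaries/v1.31.1 runs with the node-specific --hostname-override and --node-ip flags. A rough sketch of rendering such a line with text/template is shown below; the template text and struct are illustrative assumptions, not minikube's code.

package main

// Sketch only: render a node-specific kubelet ExecStart line like the one above.
import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinDir   string // e.g. /var/lib/minikube/binaries/v1.31.1
	NodeName string // e.g. ha-685475-m03
	NodeIP   string // e.g. 192.168.39.84
}

const execStartTmpl = `ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(execStartTmpl))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
		NodeName: "ha-685475-m03",
		NodeIP:   "192.168.39.84",
	})
}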
	I0924 18:43:02.966192   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:43:02.966222   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:43:02.981139   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:43:02.981202   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
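The manifest above is the kube-vip static pod for the third control-plane node: with cp_enable and vip_leaderelection set, the elected leader ARP-advertises the virtual IP 192.168.39.254, and with lb_enable/lb_port it also load-balances apiserver traffic on port 8443 across the control-plane members. A quick, hypothetical reachability check of that VIP (not part of the test itself) could look like this:

package main

// Sketch only: confirm the kube-vip virtual IP configured above accepts TCP
// connections on the apiserver port.
import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable via", conn.RemoteAddr())
}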
	I0924 18:43:02.981266   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.990568   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:43:02.990634   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.999175   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:43:02.999208   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999266   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999178   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 18:43:02.999349   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:02.999180   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 18:43:02.999391   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:02.999394   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:03.003117   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:43:03.003143   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:43:03.036084   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.036114   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:43:03.036142   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:43:03.036201   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.075645   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:43:03.075686   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 18:43:03.823364   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:43:03.832908   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:43:03.848931   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:43:03.864946   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:43:03.881201   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:43:03.885272   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:03.896591   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:04.021336   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:04.039285   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:04.039604   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:04.039646   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:04.055236   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0924 18:43:04.055694   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:04.056178   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:04.056193   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:04.056537   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:04.056733   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:04.056878   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:43:04.057018   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:43:04.057041   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:04.059760   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060326   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:04.060356   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060505   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:04.060673   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:04.060817   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:04.060972   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:04.197827   22837 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:04.197878   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0924 18:43:25.103587   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (20.905680905s)
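Joining the extra control-plane node is a two-step exchange: the primary runs "kubeadm token create --print-join-command --ttl=0" (logged a few lines earlier) to mint a bootstrap token plus the discovery CA hash, and the new machine then runs "kubeadm join ... --control-plane" with its own advertise address, which completes above in about 21 seconds. A hedged sketch of composing that command in Go, with placeholders instead of the real token and hash:

package main

// Sketch only: build the kubeadm join invocation for an additional
// control-plane node, mirroring the command logged above.
import "fmt"

func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
			"--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
		endpoint, token, caHash, nodeName, advertiseIP, port)
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>",                    // from kubeadm token create --print-join-command
		"sha256:<discovery-ca-hash>", // from the same output
		"ha-685475-m03",
		"192.168.39.84",
		8443,
	))
}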
	I0924 18:43:25.103634   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:43:25.704348   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m03 minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:43:25.818601   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:43:25.943482   22837 start.go:319] duration metric: took 21.886600064s to joinCluster
	I0924 18:43:25.943562   22837 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:25.943868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:25.945143   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:43:25.946900   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:26.202957   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:26.232194   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:43:26.232534   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:43:26.232613   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:43:26.232964   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m03" to be "Ready" ...
	I0924 18:43:26.233091   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.233102   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.233113   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.233119   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.236798   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:26.733233   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.733268   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.733273   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.736350   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:27.234119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.234154   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.234165   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.234175   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.240637   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:27.733351   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.733376   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.733387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.742949   22837 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0924 18:43:28.233173   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.233194   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.233202   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.233206   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.236224   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:28.237052   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:28.733360   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.733382   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.733399   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.736288   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:29.233877   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.233916   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.233928   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.233933   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.239798   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:29.733882   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.733906   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.733918   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.733925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.738420   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:30.233669   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.233691   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.233699   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.233702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.237023   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:30.237689   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:30.733690   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.733716   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.733726   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.733733   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.736562   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:31.233177   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.233204   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.233216   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.233221   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.237262   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:31.733331   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.733356   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.733368   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.733375   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.736291   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:32.234100   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.234122   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.234130   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.234134   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.237699   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:32.238691   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:32.734110   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.734139   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.734148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.734156   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.737099   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:33.233554   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.233581   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.233597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.233602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.236923   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:33.733151   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.733173   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.733181   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.733186   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.736346   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.234015   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.234035   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.234045   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.234049   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.237241   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.734163   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.734184   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.734193   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.734196   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.737761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.738342   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:35.234001   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.234024   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.234032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.234036   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.237606   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:35.733696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.733720   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.733730   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.733735   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.744612   22837 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0924 18:43:36.233198   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.233218   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.233226   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.233230   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.236903   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:36.734073   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.734097   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.734107   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.734113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.737583   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.234135   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.234158   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.234166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.234170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.237414   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.238235   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:37.733447   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.733464   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.733472   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.733477   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.737157   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.233502   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.233528   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.233541   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.233550   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.236943   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.734024   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.734049   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.734061   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.734068   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.737560   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:39.233277   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.233313   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.238242   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:39.238885   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:39.733235   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.733265   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.733269   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.736692   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.233260   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.233287   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.233300   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.233308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.236543   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.733171   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.733195   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.733205   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.733212   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.740055   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:41.233389   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.233414   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.233422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.233428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.238076   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.733867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.733888   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.733896   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.733902   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.738641   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.739398   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:42.233262   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.233290   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.233314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.236491   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:42.733416   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.733438   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.733445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.733450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.736799   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.233279   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.233308   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.233312   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.238341   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.238906   22837 node_ready.go:49] node "ha-685475-m03" has status "Ready":"True"
	I0924 18:43:43.238924   22837 node_ready.go:38] duration metric: took 17.005939201s for node "ha-685475-m03" to be "Ready" ...
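The polling above is minikube's node_ready wait: roughly every 500 ms it GETs /api/v1/nodes/ha-685475-m03 through the first control plane (the stale VIP kubeconfig host was overridden earlier) until the Ready condition turns True, which here took about 17 s. The same wait expressed with client-go instead of raw REST calls might look like the sketch below; it is an illustration under assumed defaults, not the code the test uses.

package main

// Sketch only: poll a node until its Ready condition is True, mirroring the
// node_ready loop in the log.
import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-685475-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500 ms between polls
	}
	fmt.Println("timed out waiting for node to become Ready")
}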
	I0924 18:43:43.238932   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:43.239003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:43.239014   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.239021   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.239028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.244370   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.251285   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.251369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:43:43.251380   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.251391   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.251397   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.254058   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.254668   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.254684   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.254696   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.254705   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.256747   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.257336   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.257356   22837 pod_ready.go:82] duration metric: took 6.045735ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257366   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257424   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:43:43.257436   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.257446   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.257453   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.259853   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.260510   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.260535   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.260545   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.260560   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.262661   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.263075   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.263089   22837 pod_ready.go:82] duration metric: took 5.713062ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263099   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263153   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:43:43.263164   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.263173   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.263181   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.265421   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.266025   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.266041   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.266051   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.266056   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.268154   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.268655   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.268677   22837 pod_ready.go:82] duration metric: took 5.571952ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268686   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268729   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:43:43.268736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.268743   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.268748   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.270920   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.271534   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:43.271559   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.271569   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.271575   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.273706   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.274155   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.274174   22837 pod_ready.go:82] duration metric: took 5.482358ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.274182   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.433530   22837 request.go:632] Waited for 159.301092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433597   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433607   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.433614   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.433620   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.436812   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.633686   22837 request.go:632] Waited for 196.323402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633768   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633775   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.633786   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.633789   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.636913   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.637664   22837 pod_ready.go:93] pod "etcd-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.637687   22837 pod_ready.go:82] duration metric: took 363.498352ms for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.637711   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.833926   22837 request.go:632] Waited for 196.128909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.833999   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.834017   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.834032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.834048   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.837007   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:44.033945   22837 request.go:632] Waited for 196.25ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.033995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.034000   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.034007   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.034013   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.037183   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.037998   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.038015   22837 pod_ready.go:82] duration metric: took 400.293259ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.038024   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.233670   22837 request.go:632] Waited for 195.573608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233746   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233751   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.233759   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.233770   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.236800   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.434104   22837 request.go:632] Waited for 196.353101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434150   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434155   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.434162   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.434166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.437459   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.438061   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.438077   22837 pod_ready.go:82] duration metric: took 400.046958ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.438087   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.634247   22837 request.go:632] Waited for 196.068994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634307   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634314   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.634323   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.634333   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.637761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.834009   22837 request.go:632] Waited for 195.341273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834067   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.834075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.834079   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.837377   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.838102   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.838124   22837 pod_ready.go:82] duration metric: took 400.029506ms for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.838137   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.033524   22837 request.go:632] Waited for 195.317742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033577   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033583   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.033597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.033602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.038542   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.233396   22837 request.go:632] Waited for 194.275856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233476   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233483   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.233494   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.233499   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.237836   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.238292   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.238309   22837 pod_ready.go:82] duration metric: took 400.16501ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.238319   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.434068   22837 request.go:632] Waited for 195.691023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434126   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434131   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.434138   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.434142   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.437774   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.634002   22837 request.go:632] Waited for 195.223479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634063   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634070   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.634080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.634086   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.637445   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.638048   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.638072   22837 pod_ready.go:82] duration metric: took 399.746216ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.638086   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.833552   22837 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833626   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.833637   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.833645   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.837253   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.033410   22837 request.go:632] Waited for 195.28753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033466   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033471   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.033479   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.033484   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.036819   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.037577   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.037601   22837 pod_ready.go:82] duration metric: took 399.507145ms for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.037614   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.233664   22837 request.go:632] Waited for 195.987183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233730   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.233744   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.233751   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.236704   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.433753   22837 request.go:632] Waited for 196.36056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433836   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433849   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.433858   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.433864   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.436885   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.437346   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.437362   22837 pod_ready.go:82] duration metric: took 399.741929ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.437371   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.633383   22837 request.go:632] Waited for 195.935746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633452   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.633467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.633472   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.636654   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.833848   22837 request.go:632] Waited for 196.369969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833916   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833926   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.833936   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.833944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.836871   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.837369   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.837390   22837 pod_ready.go:82] duration metric: took 400.012248ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.837402   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.033325   22837 request.go:632] Waited for 195.841602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033432   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033444   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.033452   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.033455   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.037080   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.234175   22837 request.go:632] Waited for 196.377747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234251   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234257   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.234266   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.234278   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.238255   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.238898   22837 pod_ready.go:93] pod "kube-proxy-mzlfj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.238919   22837 pod_ready.go:82] duration metric: took 401.508549ms for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.238933   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.433952   22837 request.go:632] Waited for 194.91975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434033   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434044   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.434055   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.434064   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.437332   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.633347   22837 request.go:632] Waited for 195.287392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633423   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633433   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.633441   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.633445   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.636933   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.637777   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.637815   22837 pod_ready.go:82] duration metric: took 398.871168ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.637829   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.834176   22837 request.go:632] Waited for 196.271361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834232   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834238   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.834246   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.834250   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.836928   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:48.033993   22837 request.go:632] Waited for 196.330346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034058   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034064   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.034074   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.034084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.037490   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.038369   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.038391   22837 pod_ready.go:82] duration metric: took 400.547551ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.038404   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.233397   22837 request.go:632] Waited for 194.929707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233454   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.233467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.233471   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.236987   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.433994   22837 request.go:632] Waited for 196.397643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434055   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434062   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.434073   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.434081   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.437996   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.438514   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.438617   22837 pod_ready.go:82] duration metric: took 400.123712ms for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.438680   22837 pod_ready.go:39] duration metric: took 5.199733297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:48.438705   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:43:48.438774   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:43:48.452044   22837 api_server.go:72] duration metric: took 22.508447307s to wait for apiserver process to appear ...
	I0924 18:43:48.452066   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:43:48.452082   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:43:48.457867   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:43:48.457929   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:43:48.457937   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.457945   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.457950   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.458795   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:43:48.458877   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:43:48.458893   22837 api_server.go:131] duration metric: took 6.820487ms to wait for apiserver health ...
	I0924 18:43:48.458900   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:43:48.634297   22837 request.go:632] Waited for 175.332984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634358   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.634381   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.634385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.640434   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:48.648701   22837 system_pods.go:59] 24 kube-system pods found
	I0924 18:43:48.648727   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:48.648734   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:48.648739   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:48.648744   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:48.648749   22837 system_pods.go:61] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:48.648753   22837 system_pods.go:61] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:48.648758   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:48.648764   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:48.648769   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:48.648778   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:48.648786   22837 system_pods.go:61] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:48.648794   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:48.648799   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:48.648804   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:48.648810   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:48.648818   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:48.648824   22837 system_pods.go:61] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:48.648829   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:48.648835   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:48.648848   22837 system_pods.go:61] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:48.648855   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:48.648860   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:48.648867   22837 system_pods.go:61] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:48.648873   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:48.648881   22837 system_pods.go:74] duration metric: took 189.974541ms to wait for pod list to return data ...
	I0924 18:43:48.648894   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:43:48.834315   22837 request.go:632] Waited for 185.353374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.834382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.834385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.838136   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.838236   22837 default_sa.go:45] found service account: "default"
	I0924 18:43:48.838249   22837 default_sa.go:55] duration metric: took 189.347233ms for default service account to be created ...
	I0924 18:43:48.838257   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:43:49.033856   22837 request.go:632] Waited for 195.536486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033925   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033930   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.033939   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.033944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.040875   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:49.047492   22837 system_pods.go:86] 24 kube-system pods found
	I0924 18:43:49.047517   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:49.047522   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:49.047526   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:49.047531   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:49.047535   22837 system_pods.go:89] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:49.047538   22837 system_pods.go:89] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:49.047541   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:49.047544   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:49.047549   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:49.047553   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:49.047556   22837 system_pods.go:89] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:49.047560   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:49.047563   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:49.047567   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:49.047570   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:49.047574   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:49.047577   22837 system_pods.go:89] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:49.047580   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:49.047583   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:49.047586   22837 system_pods.go:89] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:49.047589   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:49.047591   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:49.047594   22837 system_pods.go:89] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:49.047597   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:49.047603   22837 system_pods.go:126] duration metric: took 209.341697ms to wait for k8s-apps to be running ...
	I0924 18:43:49.047611   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:43:49.047657   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:49.065856   22837 system_svc.go:56] duration metric: took 18.234674ms WaitForService to wait for kubelet
	I0924 18:43:49.065885   22837 kubeadm.go:582] duration metric: took 23.12228905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:43:49.065905   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:43:49.234361   22837 request.go:632] Waited for 168.355831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234415   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.234422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.234427   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.238548   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:49.242121   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242144   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242160   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242164   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242167   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242170   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242174   22837 node_conditions.go:105] duration metric: took 176.264509ms to run NodePressure ...
	I0924 18:43:49.242184   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:43:49.242210   22837 start.go:255] writing updated cluster config ...
	I0924 18:43:49.242507   22837 ssh_runner.go:195] Run: rm -f paused
	I0924 18:43:49.294738   22837 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:43:49.297711   22837 out.go:177] * Done! kubectl is now configured to use "ha-685475" cluster and "default" namespace by default
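	The run above is minikube's own readiness loop (pod_ready.go, api_server.go): it polls each control-plane pod in kube-system until its Ready condition is True, then probes the apiserver's /healthz and /version endpoints before declaring the cluster usable. The sketch below reproduces that shape with client-go. It is not minikube's code; the kubeconfig path, the pod name (taken from this log), the 6-minute timeout, and the 500 ms poll interval are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, the same check
	// the pod_ready.go lines above log as `has status "Ready":"True"`.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig pointing at the ha-685475 cluster; adjust the path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// Poll one control-plane pod (name taken from the log) until it is Ready.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-685475-m02", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				break
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for pod to be Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}

		// The apiserver counts as healthy when GET /healthz returns "ok",
		// mirroring the api_server.go healthz check above.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		fmt.Printf("healthz: %s (err: %v)\n", body, err)
	}

	The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the log come from client-go's default client-side rate limiter pacing bursts of GET requests; a polling loop like the sketch above goes through the same limiter, and those messages indicate request pacing on the client, not an apiserver problem.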
	
	
	==> CRI-O <==
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.957875366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203641957848478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8ee0785-bfb7-4a20-b402-184284439f01 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.960028472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c23d58d7-6f84-4bf9-af19-21d32fb1f336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.960103468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c23d58d7-6f84-4bf9-af19-21d32fb1f336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.960412845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c23d58d7-6f84-4bf9-af19-21d32fb1f336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.995973403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eff3bb8d-e2be-4974-9e09-485b5bb02476 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.996061533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eff3bb8d-e2be-4974-9e09-485b5bb02476 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.996953899Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73ae69d7-03ad-494e-9818-29e1aec2c3cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.997349942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203641997330128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73ae69d7-03ad-494e-9818-29e1aec2c3cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.997743160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b01214bf-b878-4e35-9e63-05603ee988de name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.997925627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b01214bf-b878-4e35-9e63-05603ee988de name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:21 ha-685475 crio[662]: time="2024-09-24 18:47:21.998189566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b01214bf-b878-4e35-9e63-05603ee988de name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.031676786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd53b690-36e9-4565-969b-c9e043220b16 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.031862025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd53b690-36e9-4565-969b-c9e043220b16 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.032835133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e1a2a02-3b42-48bf-ab53-8faba2abaeb4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.033223116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203642033203212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e1a2a02-3b42-48bf-ab53-8faba2abaeb4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.033868805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42164186-2ba3-43a5-8d59-c966acd84b59 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.033931903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42164186-2ba3-43a5-8d59-c966acd84b59 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.034142387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42164186-2ba3-43a5-8d59-c966acd84b59 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.070863804Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81f34cff-08ec-4434-ad1a-368f2763ab4b name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.070950920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81f34cff-08ec-4434-ad1a-368f2763ab4b name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.071998366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=720bd6f7-6404-4fe1-bcaa-5db39b05bcba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.072399377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203642072378282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=720bd6f7-6404-4fe1-bcaa-5db39b05bcba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.072785382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=024c5c14-2d36-450a-a069-b91b338f975e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.072900642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=024c5c14-2d36-450a-a069-b91b338f975e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:22 ha-685475 crio[662]: time="2024-09-24 18:47:22.073119171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=024c5c14-2d36-450a-a069-b91b338f975e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b86d48937d84       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2517ecd8d61cd       busybox-7dff88458-hmkfk
	2c7b4241a9158       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c2c9f0a12f919       coredns-7c65d6cfc9-jf7wr
	7101ffaf02677       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5cb07ffbc15c1       storage-provisioner
	75aac96a2239b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   9f53b2b4e4e29       coredns-7c65d6cfc9-fchhl
	709da73468c82       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               0                   6c65efd736505       kindnet-ms6qb
	9ea87ecceac1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                0                   bbb4cec818818       kube-proxy-b8x2w
	40f5664db9017       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8b6709d2b9d03       kube-vip-ha-685475
	e62a02dab3075       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   9ade6d826e125       kube-scheduler-ha-685475
	efe5b6f3ceb69       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5fa1209cd75b8       etcd-ha-685475
	5686da29f7aac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   480a4fc4d507f       kube-controller-manager-ha-685475
	838b3cda70bf1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   2ee65b29ae3d2       kube-apiserver-ha-685475
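
The table above is the human-readable rendering of the unfiltered ListContainers responses that dominate the crio debug log ("No filters were applied, returning full container list"). For reference, a minimal Go sketch of issuing the same three CRI calls seen in that log (Version, ImageFsInfo, ListContainers) against the socket advertised in the node annotations, unix:///var/run/crio/crio.sock; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available and is illustrative only, not part of the test suite.

// Illustrative sketch (assumption: run on the node, with access to the crio socket).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// crio listens on the unix socket referenced by kubeadm.alpha.kubernetes.io/cri-socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// The same requests that appear in the crio debug log above.
	if v, err := rt.Version(ctx, &runtimeapi.VersionRequest{}); err == nil {
		fmt.Println("runtime:", v.RuntimeName, v.RuntimeVersion)
	}
	if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
		}
	}
	// An empty filter returns the full container list, matching the log output.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.Labels["io.kubernetes.pod.name"])
	}
}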
	
	
	==> coredns [2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235] <==
	[INFO] 10.244.2.2:43478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117921s
	[INFO] 10.244.0.4:52601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001246s
	[INFO] 10.244.0.4:57647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118972s
	[INFO] 10.244.0.4:59286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001434237s
	[INFO] 10.244.0.4:55987 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082081s
	[INFO] 10.244.1.2:44949 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002196411s
	[INFO] 10.244.1.2:57646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132442s
	[INFO] 10.244.1.2:45986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001533759s
	[INFO] 10.244.1.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159221s
	[INFO] 10.244.1.2:47730 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122802s
	[INFO] 10.244.2.2:49373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174893s
	[INFO] 10.244.0.4:52492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008787s
	[INFO] 10.244.0.4:33570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049583s
	[INFO] 10.244.0.4:35717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036153s
	[INFO] 10.244.1.2:39348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262289s
	[INFO] 10.244.1.2:44144 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216176s
	[INFO] 10.244.1.2:37532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017928s
	[INFO] 10.244.2.2:34536 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139562s
	[INFO] 10.244.0.4:43378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108735s
	[INFO] 10.244.0.4:50975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139299s
	[INFO] 10.244.0.4:36798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091581s
	[INFO] 10.244.1.2:55450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136524s
	[INFO] 10.244.1.2:46887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019253s
	[INFO] 10.244.1.2:39275 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113225s
	[INFO] 10.244.1.2:44182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097101s
	
	
	==> coredns [75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f] <==
	[INFO] 10.244.2.2:51539 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.04751056s
	[INFO] 10.244.2.2:56073 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013178352s
	[INFO] 10.244.0.4:46583 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000099115s
	[INFO] 10.244.1.2:39503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018791s
	[INFO] 10.244.1.2:56200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000107364s
	[INFO] 10.244.1.2:50181 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000477328s
	[INFO] 10.244.2.2:48517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149349s
	[INFO] 10.244.2.2:37426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161156s
	[INFO] 10.244.2.2:51780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245454s
	[INFO] 10.244.0.4:37360 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192766s
	[INFO] 10.244.0.4:49282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067708s
	[INFO] 10.244.0.4:50475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049077s
	[INFO] 10.244.0.4:42734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103381s
	[INFO] 10.244.1.2:34090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126966s
	[INFO] 10.244.1.2:49474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199973s
	[INFO] 10.244.1.2:47488 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080517s
	[INFO] 10.244.2.2:58501 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129358s
	[INFO] 10.244.2.2:35831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166592s
	[INFO] 10.244.2.2:46260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105019s
	[INFO] 10.244.0.4:34512 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070631s
	[INFO] 10.244.1.2:40219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095437s
	[INFO] 10.244.2.2:45584 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263954s
	[INFO] 10.244.2.2:45346 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105804s
	[INFO] 10.244.2.2:33451 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099783s
	[INFO] 10.244.0.4:54263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102026s
	
	
	==> describe nodes <==
	Name:               ha-685475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-685475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6728db94ca4a90af6f3c76683b52c2
	  System UUID:                7d6728db-94ca-4a90-af6f-3c76683b52c2
	  Boot ID:                    d6338982-1afe-44d6-a104-48e80df984ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmkfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7c65d6cfc9-fchhl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m59s
	  kube-system                 coredns-7c65d6cfc9-jf7wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m59s
	  kube-system                 etcd-ha-685475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m5s
	  kube-system                 kindnet-ms6qb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m59s
	  kube-system                 kube-apiserver-ha-685475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-685475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-b8x2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-ha-685475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-685475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m56s  kube-proxy       
	  Normal  Starting                 6m3s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s   kubelet          Node ha-685475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s   kubelet          Node ha-685475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s   kubelet          Node ha-685475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m     node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  NodeReady                5m45s  kubelet          Node ha-685475 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  RegisteredNode           3m51s  node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	
	
	Name:               ha-685475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:42:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:44:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-685475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad56c26961cf4d94852f19122c4c499b
	  System UUID:                ad56c269-61cf-4d94-852f-19122c4c499b
	  Boot ID:                    e772e23b-db48-4470-a822-ef2e8ff749c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6g8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-685475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-pwvfj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-685475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-685475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-dlr8f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-685475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-685475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-685475-m02 status is now: NodeNotReady
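
The Ready=Unknown condition and the node.kubernetes.io/unreachable taints on ha-685475-m02 are the state the HA tests poll for after a secondary control-plane node is stopped. A minimal client-go sketch of reading the same node conditions and taint counts shown in this describe output, assuming kubeconfig access to the cluster; it is illustrative only, not the test suite's own helper code.

// Illustrative sketch (assumption: ~/.kube/config points at this cluster).
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Report the Ready condition; a stopped node shows Unknown, as above.
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		fmt.Printf("%-20s Ready=%-8s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
	}
}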
	
	
	Name:               ha-685475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:43:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-685475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666f55d24f014a7598addca9cb06654f
	  System UUID:                666f55d2-4f01-4a75-98ad-dca9cb06654f
	  Boot ID:                    4a6f3fd5-8906-4dce-b1f1-42fe5e6d144d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gksmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-685475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m58s
	  kube-system                 kindnet-7w5dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-685475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ha-685475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-mzlfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-ha-685475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-685475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x8 over 4m)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)  kubelet          Node ha-685475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x7 over 4m)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s            node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           3m55s            node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           3m51s            node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	
	
	Name:               ha-685475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_44_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-685475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5be0e3597a0f4236b1fa9e5e221d49dc
	  System UUID:                5be0e359-7a0f-4236-b1fa-9e5e221d49dc
	  Boot ID:                    076086b0-4e87-4ae6-8221-9f0322235896
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4nlv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-9m62z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m59s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m59s)  kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m59s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node ha-685475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep24 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047306] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.684392] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.705375] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.505519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep24 18:41] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.156659] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148421] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.267579] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.782999] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.621822] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.062553] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.171108] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.082463] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344664] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.133235] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:42] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707] <==
	{"level":"warn","ts":"2024-09-24T18:47:22.312825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.324256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.326878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.336007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.339276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.348657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.361617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.368001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.371944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.374742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.380774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.388481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.394207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.397407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.400345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.405063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.406080Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.414358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.422444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.425264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.426104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.428228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.443762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.449041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:22.454745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:47:22 up 6 min,  0 users,  load average: 0.05, 0.20, 0.12
	Linux ha-685475 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678] <==
	I0924 18:46:46.559253       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:46:56.555306       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:46:56.555342       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:46:56.555521       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:46:56.555558       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:46:56.555611       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:46:56.555617       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:46:56.555657       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:46:56.555675       1 main.go:299] handling current node
	I0924 18:47:06.561634       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:06.561692       1 main.go:299] handling current node
	I0924 18:47:06.561710       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:06.561715       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:06.561848       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:06.561866       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:06.561914       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:06.561931       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:16.564762       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:16.564893       1 main.go:299] handling current node
	I0924 18:47:16.564926       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:16.564945       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:16.565064       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:16.565119       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:16.565194       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:16.565212       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad] <==
	I0924 18:41:17.672745       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 18:41:17.723505       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 18:41:17.816990       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0924 18:41:17.823594       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0924 18:41:17.824633       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:41:17.829868       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:41:18.021888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 18:41:19.286470       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 18:41:19.299197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 18:41:19.310963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 18:41:23.075217       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 18:41:23.423831       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 18:43:54.268115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E0924 18:43:54.604143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58158: use of closed network connection
	E0924 18:43:54.783115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58164: use of closed network connection
	E0924 18:43:54.950893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58168: use of closed network connection
	E0924 18:43:55.309336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58194: use of closed network connection
	E0924 18:43:55.511247       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58214: use of closed network connection
	E0924 18:43:55.954224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58254: use of closed network connection
	E0924 18:43:56.117109       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58266: use of closed network connection
	E0924 18:43:56.281611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58282: use of closed network connection
	E0924 18:43:56.451342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58292: use of closed network connection
	E0924 18:43:56.632767       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58308: use of closed network connection
	E0924 18:43:56.794004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58330: use of closed network connection
	W0924 18:45:17.827671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7 192.168.39.84]
	
	
	==> kube-controller-manager [5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8] <==
	I0924 18:44:24.247180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.247492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.265765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.436622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:44:24.498908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.871884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:26.085940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.805304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.915596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.967113       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.968167       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-685475-m04"
	I0924 18:44:28.400258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:34.420054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.456619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:44:42.456667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.471240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.830571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:54.874379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:45:36.091506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.091566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:45:36.110189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.281556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.99566ms"
	I0924 18:45:36.282660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.243µs"
	I0924 18:45:38.045778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:41.375346       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	
	
	==> kube-proxy [9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:41:25.700409       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:41:25.766662       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	E0924 18:41:25.766911       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:41:25.811114       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:41:25.811144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:41:25.811180       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:41:25.813724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:41:25.814452       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:41:25.814533       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:41:25.818487       1 config.go:199] "Starting service config controller"
	I0924 18:41:25.819365       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:41:25.820408       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:41:25.820718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:41:25.821642       1 config.go:328] "Starting node config controller"
	I0924 18:41:25.822952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:41:25.921008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:41:25.923339       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:41:25.923395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc] <==
	W0924 18:41:16.961127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:41:16.961178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:16.962189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:41:16.962268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.047239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:41:17.047364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.102252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.102364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.222048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:41:17.222166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.230553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:41:17.231072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.384731       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:41:17.384781       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:41:17.385753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.385816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:41:20.277859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 18:43:50.159728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w6g8l" node="ha-685475-m02"
	E0924 18:43:50.159906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" pod="default/busybox-7dff88458-w6g8l"
	E0924 18:43:50.160616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hmkfk" node="ha-685475"
	E0924 18:43:50.160683       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" pod="default/busybox-7dff88458-hmkfk"
	E0924 18:44:24.296261       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:44:24.296334       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d172ae09-1eb7-4e5d-a5a1-e865b926b6eb(kube-system/kube-proxy-9m62z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9m62z"
	E0924 18:44:24.296350       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" pod="kube-system/kube-proxy-9m62z"
	I0924 18:44:24.296367       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	
	
	==> kubelet <==
	Sep 24 18:46:09 ha-685475 kubelet[1306]: E0924 18:46:09.287410    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203569286673502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.240421    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289533    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289568    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292185    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292494    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293680    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293717    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295059    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295397    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296553    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296987    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.298543    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.301982    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.239486    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303369    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303405    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-685475 -n ha-685475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.26s)
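The kubelet entries in the log above repeatedly fail to create the KUBE-KUBELET-CANARY chain because ip6tables cannot initialize the `nat' table, which typically means the guest kernel has no ip6table_nat module loaded. As a rough, hypothetical check (the profile name is taken from this run; whether the module is loadable at all depends on the Buildroot kernel 5.10.207 build), one could run:

    $ out/minikube-linux-amd64 ssh -p ha-685475 -- lsmod | grep ip6table_nat    # prints nothing if the module is absent
    $ out/minikube-linux-amd64 ssh -p ha-685475 -- sudo modprobe ip6table_nat   # may fail if the module was not built for this kernel

Since kube-proxy reports running single-stack IPv4 with the iptables proxier, these ip6tables messages are likely unrelated to the node-stop failure itself.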

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0924 18:47:24.266577   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.39081835s)
ha_test.go:413: expected profile "ha-685475" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-685475\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-685475\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-685475\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.7\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.17\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.84\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.123\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"me
tallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":2
62144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-685475 -n ha-685475
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 logs -n 25: (1.316869046s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m03_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m04 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp testdata/cp-test.txt                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m03 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-685475 node stop m02 -v=7                                                    | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:40:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:40:35.618652   22837 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:40:35.618943   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.618954   22837 out.go:358] Setting ErrFile to fd 2...
	I0924 18:40:35.618959   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.619154   22837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:40:35.619730   22837 out.go:352] Setting JSON to false
	I0924 18:40:35.620645   22837 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1387,"bootTime":1727201849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:40:35.620729   22837 start.go:139] virtualization: kvm guest
	I0924 18:40:35.622855   22837 out.go:177] * [ha-685475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:40:35.624385   22837 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:40:35.624401   22837 notify.go:220] Checking for updates...
	I0924 18:40:35.627290   22837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:40:35.628609   22837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:40:35.629977   22837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.631349   22837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:40:35.632638   22837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:40:35.634090   22837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:40:35.670308   22837 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:40:35.671877   22837 start.go:297] selected driver: kvm2
	I0924 18:40:35.671905   22837 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:40:35.671922   22837 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:40:35.672818   22837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.672911   22837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:40:35.688646   22837 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:40:35.688691   22837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:40:35.688908   22837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:40:35.688933   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:40:35.688955   22837 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0924 18:40:35.688963   22837 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:40:35.689004   22837 start.go:340] cluster config:
	{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:40:35.689084   22837 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.691077   22837 out.go:177] * Starting "ha-685475" primary control-plane node in "ha-685475" cluster
	I0924 18:40:35.692675   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:40:35.692727   22837 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:40:35.692737   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:40:35.692807   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:40:35.692817   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:40:35.693129   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:40:35.693148   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json: {Name:mkf04021428036cd37ddc8fca7772aaba780fa7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:40:35.693278   22837 start.go:360] acquireMachinesLock for ha-685475: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:40:35.693307   22837 start.go:364] duration metric: took 16.26µs to acquireMachinesLock for "ha-685475"
	I0924 18:40:35.693323   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:40:35.693388   22837 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:40:35.695217   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:40:35.695377   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:40:35.695407   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:40:35.709830   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0924 18:40:35.710273   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:40:35.710759   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:40:35.710782   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:40:35.711106   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:40:35.711266   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:40:35.711382   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:40:35.711548   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:40:35.711571   22837 client.go:168] LocalClient.Create starting
	I0924 18:40:35.711598   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:40:35.711635   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711648   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711694   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:40:35.711713   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711724   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711739   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:40:35.711747   22837 main.go:141] libmachine: (ha-685475) Calling .PreCreateCheck
	I0924 18:40:35.712023   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:40:35.712397   22837 main.go:141] libmachine: Creating machine...
	I0924 18:40:35.712411   22837 main.go:141] libmachine: (ha-685475) Calling .Create
	I0924 18:40:35.712547   22837 main.go:141] libmachine: (ha-685475) Creating KVM machine...
	I0924 18:40:35.713673   22837 main.go:141] libmachine: (ha-685475) DBG | found existing default KVM network
	I0924 18:40:35.714359   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.714247   22860 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000121a50}
	I0924 18:40:35.714400   22837 main.go:141] libmachine: (ha-685475) DBG | created network xml: 
	I0924 18:40:35.714421   22837 main.go:141] libmachine: (ha-685475) DBG | <network>
	I0924 18:40:35.714434   22837 main.go:141] libmachine: (ha-685475) DBG |   <name>mk-ha-685475</name>
	I0924 18:40:35.714443   22837 main.go:141] libmachine: (ha-685475) DBG |   <dns enable='no'/>
	I0924 18:40:35.714462   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714493   22837 main.go:141] libmachine: (ha-685475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:40:35.714508   22837 main.go:141] libmachine: (ha-685475) DBG |     <dhcp>
	I0924 18:40:35.714524   22837 main.go:141] libmachine: (ha-685475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:40:35.714536   22837 main.go:141] libmachine: (ha-685475) DBG |     </dhcp>
	I0924 18:40:35.714545   22837 main.go:141] libmachine: (ha-685475) DBG |   </ip>
	I0924 18:40:35.714555   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714563   22837 main.go:141] libmachine: (ha-685475) DBG | </network>
	I0924 18:40:35.714575   22837 main.go:141] libmachine: (ha-685475) DBG | 
	I0924 18:40:35.719712   22837 main.go:141] libmachine: (ha-685475) DBG | trying to create private KVM network mk-ha-685475 192.168.39.0/24...
	I0924 18:40:35.786088   22837 main.go:141] libmachine: (ha-685475) DBG | private KVM network mk-ha-685475 192.168.39.0/24 created
	I0924 18:40:35.786128   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.786012   22860 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.786138   22837 main.go:141] libmachine: (ha-685475) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:35.786155   22837 main.go:141] libmachine: (ha-685475) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:40:35.786173   22837 main.go:141] libmachine: (ha-685475) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:40:36.040941   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.040806   22860 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa...
	I0924 18:40:36.268625   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268496   22860 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk...
	I0924 18:40:36.268672   22837 main.go:141] libmachine: (ha-685475) DBG | Writing magic tar header
	I0924 18:40:36.268724   22837 main.go:141] libmachine: (ha-685475) DBG | Writing SSH key tar header
	I0924 18:40:36.268756   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268615   22860 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:36.268769   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 (perms=drwx------)
	I0924 18:40:36.268781   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:40:36.268787   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:40:36.268796   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:40:36.268804   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:40:36.268835   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475
	I0924 18:40:36.268855   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:40:36.268865   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:40:36.268883   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:36.268895   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:36.268900   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:40:36.268908   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:40:36.268917   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:40:36.268929   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home
	I0924 18:40:36.268937   22837 main.go:141] libmachine: (ha-685475) DBG | Skipping /home - not owner
	I0924 18:40:36.269970   22837 main.go:141] libmachine: (ha-685475) define libvirt domain using xml: 
	I0924 18:40:36.270004   22837 main.go:141] libmachine: (ha-685475) <domain type='kvm'>
	I0924 18:40:36.270014   22837 main.go:141] libmachine: (ha-685475)   <name>ha-685475</name>
	I0924 18:40:36.270022   22837 main.go:141] libmachine: (ha-685475)   <memory unit='MiB'>2200</memory>
	I0924 18:40:36.270031   22837 main.go:141] libmachine: (ha-685475)   <vcpu>2</vcpu>
	I0924 18:40:36.270041   22837 main.go:141] libmachine: (ha-685475)   <features>
	I0924 18:40:36.270049   22837 main.go:141] libmachine: (ha-685475)     <acpi/>
	I0924 18:40:36.270059   22837 main.go:141] libmachine: (ha-685475)     <apic/>
	I0924 18:40:36.270084   22837 main.go:141] libmachine: (ha-685475)     <pae/>
	I0924 18:40:36.270105   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270115   22837 main.go:141] libmachine: (ha-685475)   </features>
	I0924 18:40:36.270123   22837 main.go:141] libmachine: (ha-685475)   <cpu mode='host-passthrough'>
	I0924 18:40:36.270131   22837 main.go:141] libmachine: (ha-685475)   
	I0924 18:40:36.270135   22837 main.go:141] libmachine: (ha-685475)   </cpu>
	I0924 18:40:36.270139   22837 main.go:141] libmachine: (ha-685475)   <os>
	I0924 18:40:36.270143   22837 main.go:141] libmachine: (ha-685475)     <type>hvm</type>
	I0924 18:40:36.270148   22837 main.go:141] libmachine: (ha-685475)     <boot dev='cdrom'/>
	I0924 18:40:36.270152   22837 main.go:141] libmachine: (ha-685475)     <boot dev='hd'/>
	I0924 18:40:36.270157   22837 main.go:141] libmachine: (ha-685475)     <bootmenu enable='no'/>
	I0924 18:40:36.270162   22837 main.go:141] libmachine: (ha-685475)   </os>
	I0924 18:40:36.270168   22837 main.go:141] libmachine: (ha-685475)   <devices>
	I0924 18:40:36.270179   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='cdrom'>
	I0924 18:40:36.270191   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/boot2docker.iso'/>
	I0924 18:40:36.270215   22837 main.go:141] libmachine: (ha-685475)       <target dev='hdc' bus='scsi'/>
	I0924 18:40:36.270223   22837 main.go:141] libmachine: (ha-685475)       <readonly/>
	I0924 18:40:36.270227   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270232   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='disk'>
	I0924 18:40:36.270240   22837 main.go:141] libmachine: (ha-685475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:40:36.270255   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk'/>
	I0924 18:40:36.270268   22837 main.go:141] libmachine: (ha-685475)       <target dev='hda' bus='virtio'/>
	I0924 18:40:36.270285   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270298   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270315   22837 main.go:141] libmachine: (ha-685475)       <source network='mk-ha-685475'/>
	I0924 18:40:36.270332   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270343   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270354   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270365   22837 main.go:141] libmachine: (ha-685475)       <source network='default'/>
	I0924 18:40:36.270375   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270384   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270394   22837 main.go:141] libmachine: (ha-685475)     <serial type='pty'>
	I0924 18:40:36.270402   22837 main.go:141] libmachine: (ha-685475)       <target port='0'/>
	I0924 18:40:36.270412   22837 main.go:141] libmachine: (ha-685475)     </serial>
	I0924 18:40:36.270421   22837 main.go:141] libmachine: (ha-685475)     <console type='pty'>
	I0924 18:40:36.270430   22837 main.go:141] libmachine: (ha-685475)       <target type='serial' port='0'/>
	I0924 18:40:36.270438   22837 main.go:141] libmachine: (ha-685475)     </console>
	I0924 18:40:36.270445   22837 main.go:141] libmachine: (ha-685475)     <rng model='virtio'>
	I0924 18:40:36.270455   22837 main.go:141] libmachine: (ha-685475)       <backend model='random'>/dev/random</backend>
	I0924 18:40:36.270471   22837 main.go:141] libmachine: (ha-685475)     </rng>
	I0924 18:40:36.270484   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270496   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270507   22837 main.go:141] libmachine: (ha-685475)   </devices>
	I0924 18:40:36.270515   22837 main.go:141] libmachine: (ha-685475) </domain>
	I0924 18:40:36.270524   22837 main.go:141] libmachine: (ha-685475) 
	I0924 18:40:36.274620   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:29:bb:c5 in network default
	I0924 18:40:36.275145   22837 main.go:141] libmachine: (ha-685475) Ensuring networks are active...
	I0924 18:40:36.275164   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:36.275867   22837 main.go:141] libmachine: (ha-685475) Ensuring network default is active
	I0924 18:40:36.276239   22837 main.go:141] libmachine: (ha-685475) Ensuring network mk-ha-685475 is active
	I0924 18:40:36.276892   22837 main.go:141] libmachine: (ha-685475) Getting domain xml...
	I0924 18:40:36.277603   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:37.460480   22837 main.go:141] libmachine: (ha-685475) Waiting to get IP...
	I0924 18:40:37.461314   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.461739   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.461774   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.461717   22860 retry.go:31] will retry after 296.388363ms: waiting for machine to come up
	I0924 18:40:37.760304   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.760785   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.760810   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.760740   22860 retry.go:31] will retry after 328.765263ms: waiting for machine to come up
	I0924 18:40:38.091364   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.091840   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.091866   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.091794   22860 retry.go:31] will retry after 475.786926ms: waiting for machine to come up
	I0924 18:40:38.569463   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.569893   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.569921   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.569836   22860 retry.go:31] will retry after 449.224473ms: waiting for machine to come up
	I0924 18:40:39.020465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.020861   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.020885   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.020825   22860 retry.go:31] will retry after 573.37705ms: waiting for machine to come up
	I0924 18:40:39.595466   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.595901   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.595920   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.595866   22860 retry.go:31] will retry after 888.819714ms: waiting for machine to come up
	I0924 18:40:40.485857   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:40.486194   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:40.486220   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:40.486169   22860 retry.go:31] will retry after 849.565748ms: waiting for machine to come up
	I0924 18:40:41.336920   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:41.337334   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:41.337355   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:41.337299   22860 retry.go:31] will retry after 943.088304ms: waiting for machine to come up
	I0924 18:40:42.282339   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:42.282747   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:42.282769   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:42.282704   22860 retry.go:31] will retry after 1.602523393s: waiting for machine to come up
	I0924 18:40:43.887465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:43.887909   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:43.887926   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:43.887863   22860 retry.go:31] will retry after 1.565249639s: waiting for machine to come up
	I0924 18:40:45.455849   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:45.456357   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:45.456383   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:45.456304   22860 retry.go:31] will retry after 2.532618475s: waiting for machine to come up
	I0924 18:40:47.991803   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:47.992180   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:47.992208   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:47.992135   22860 retry.go:31] will retry after 2.721738632s: waiting for machine to come up
	I0924 18:40:50.715276   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:50.715664   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:50.715696   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:50.715634   22860 retry.go:31] will retry after 2.97095557s: waiting for machine to come up
	I0924 18:40:53.689583   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:53.689985   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:53.690027   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:53.689963   22860 retry.go:31] will retry after 4.964736548s: waiting for machine to come up
	I0924 18:40:58.657846   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658217   22837 main.go:141] libmachine: (ha-685475) Found IP for machine: 192.168.39.7
	I0924 18:40:58.658231   22837 main.go:141] libmachine: (ha-685475) Reserving static IP address...
	I0924 18:40:58.658245   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has current primary IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658686   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "ha-685475", mac: "52:54:00:bb:26:52", ip: "192.168.39.7"} in network mk-ha-685475
	I0924 18:40:58.726895   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:40:58.726926   22837 main.go:141] libmachine: (ha-685475) Reserved static IP address: 192.168.39.7
	I0924 18:40:58.726937   22837 main.go:141] libmachine: (ha-685475) Waiting for SSH to be available...
	I0924 18:40:58.729433   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.729749   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475
	I0924 18:40:58.729778   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find defined IP address of network mk-ha-685475 interface with MAC address 52:54:00:bb:26:52
	I0924 18:40:58.729916   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:40:58.729941   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:40:58.729969   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:40:58.729980   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:40:58.729993   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:40:58.733379   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:40:58.733402   22837 main.go:141] libmachine: (ha-685475) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:40:58.733413   22837 main.go:141] libmachine: (ha-685475) DBG | command : exit 0
	I0924 18:40:58.733422   22837 main.go:141] libmachine: (ha-685475) DBG | err     : exit status 255
	I0924 18:40:58.733432   22837 main.go:141] libmachine: (ha-685475) DBG | output  : 
	I0924 18:41:01.734078   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:41:01.736442   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736846   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.736875   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736966   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:41:01.736988   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:41:01.737029   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:01.737052   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:41:01.737065   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:41:01.858518   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: <nil>: 
	I0924 18:41:01.858812   22837 main.go:141] libmachine: (ha-685475) KVM machine creation complete!
	I0924 18:41:01.859085   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:01.859647   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859818   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859970   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:01.859985   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:01.861184   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:01.861196   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:01.861201   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:01.861206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.863734   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864111   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.864137   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864287   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.864470   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864641   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864792   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.864958   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.865168   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.865180   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:01.965971   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:01.965992   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:01.965999   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.968393   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968679   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.968705   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968849   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.968989   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969127   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969226   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.969360   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.969511   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.969521   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:02.070902   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:02.070990   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:02.071004   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:02.071015   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071246   22837 buildroot.go:166] provisioning hostname "ha-685475"
	I0924 18:41:02.071275   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071415   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.074599   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.074996   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.075019   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.075149   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.075311   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075419   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075520   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.075644   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.075797   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.075808   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475 && echo "ha-685475" | sudo tee /etc/hostname
	I0924 18:41:02.191183   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:41:02.191206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.193903   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194254   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.194277   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.194612   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194742   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194863   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.195018   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.195214   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.195234   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:02.306707   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:02.306732   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:02.306752   22837 buildroot.go:174] setting up certificates
	I0924 18:41:02.306763   22837 provision.go:84] configureAuth start
	I0924 18:41:02.306771   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.307067   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:02.309510   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309793   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.309820   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309932   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.311757   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312020   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.312040   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312160   22837 provision.go:143] copyHostCerts
	I0924 18:41:02.312182   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312213   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:02.312221   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312284   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:02.312357   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312374   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:02.312380   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312403   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:02.312444   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312461   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:02.312467   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312487   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:02.312532   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475 san=[127.0.0.1 192.168.39.7 ha-685475 localhost minikube]
	I0924 18:41:02.610752   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:02.610810   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:02.610847   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.613269   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613544   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.613580   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613691   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.613856   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.614031   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.614140   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:02.696690   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:02.696775   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 18:41:02.719028   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:02.719087   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:41:02.740811   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:02.740889   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:02.762904   22837 provision.go:87] duration metric: took 456.128009ms to configureAuth
	I0924 18:41:02.762937   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:02.763113   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:02.763199   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.765836   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766227   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.766253   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766382   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.766616   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766752   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766881   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.767012   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.767181   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.767201   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:02.983298   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
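The CRI-O options above were written by running a shell pipeline on the guest over SSH. A minimal sketch of issuing the same kind of remote command with golang.org/x/crypto/ssh follows; it is not minikube's ssh_runner/sshutil code, and the host, user and key path simply reuse the values logged above.

	// Illustrative sketch only: run the remote configuration command over SSH.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.7:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same shape as the command in the log: write the CRI-O env file, then restart crio.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
		out, err := sess.CombinedOutput(cmd)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}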
	
	I0924 18:41:02.983327   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:02.983336   22837 main.go:141] libmachine: (ha-685475) Calling .GetURL
	I0924 18:41:02.984661   22837 main.go:141] libmachine: (ha-685475) DBG | Using libvirt version 6000000
	I0924 18:41:02.986674   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.986998   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.987035   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.987171   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:02.987184   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:02.987191   22837 client.go:171] duration metric: took 27.275613308s to LocalClient.Create
	I0924 18:41:02.987217   22837 start.go:167] duration metric: took 27.275670931s to libmachine.API.Create "ha-685475"
	I0924 18:41:02.987229   22837 start.go:293] postStartSetup for "ha-685475" (driver="kvm2")
	I0924 18:41:02.987244   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:02.987264   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:02.987513   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:02.987534   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.989371   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989734   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.989749   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989938   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.990114   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.990358   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.990533   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.072587   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:03.076584   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:03.076617   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:03.076688   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:03.076760   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:03.076772   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:03.076869   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:03.085953   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:03.108631   22837 start.go:296] duration metric: took 121.38524ms for postStartSetup
	I0924 18:41:03.108689   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:03.109239   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.111776   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112078   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.112107   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112319   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:03.112501   22837 start.go:128] duration metric: took 27.419103166s to createHost
	I0924 18:41:03.112522   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.114886   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115236   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.115261   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115422   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.115597   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115736   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115880   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.116026   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:03.116220   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:03.116230   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:03.223401   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203263.206629374
	
	I0924 18:41:03.223425   22837 fix.go:216] guest clock: 1727203263.206629374
	I0924 18:41:03.223432   22837 fix.go:229] Guest: 2024-09-24 18:41:03.206629374 +0000 UTC Remote: 2024-09-24 18:41:03.112512755 +0000 UTC m=+27.526898013 (delta=94.116619ms)
	I0924 18:41:03.223470   22837 fix.go:200] guest clock delta is within tolerance: 94.116619ms
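The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift if it falls within a tolerance. A small sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not necessarily minikube's actual setting.

	// Illustrative sketch only: parse the guest's `date +%s.%N` output and
	// check the clock delta against an assumed tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func withinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
	}

	func main() {
		// Guest timestamp taken from the SSH output logged above.
		delta, ok, err := withinTolerance("1727203263.206629374", 2*time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}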
	I0924 18:41:03.223475   22837 start.go:83] releasing machines lock for "ha-685475", held for 27.53015951s
	I0924 18:41:03.223493   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.223794   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.226346   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226711   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.226738   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226887   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227337   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227484   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227576   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:03.227627   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.227700   22837 ssh_runner.go:195] Run: cat /version.json
	I0924 18:41:03.227725   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.230122   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230442   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230467   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230533   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230587   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.230756   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.230907   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.230941   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230962   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.231017   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.231113   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.231229   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.231324   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.231424   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.307645   22837 ssh_runner.go:195] Run: systemctl --version
	I0924 18:41:03.331733   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:03.485763   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:03.491914   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:03.491985   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:03.507429   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:03.507461   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:03.507517   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:03.523186   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:03.536999   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:03.537069   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:03.550683   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:03.564455   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:03.675808   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:03.815291   22837 docker.go:233] disabling docker service ...
	I0924 18:41:03.815369   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:03.829457   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:03.842075   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:03.968977   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:04.100834   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:41:04.114151   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:04.131432   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:04.131492   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.141141   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:04.141212   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.150778   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.160259   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.169851   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:04.179488   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.189760   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.206045   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.215615   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:04.224420   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:04.224481   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:04.237154   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:41:04.245941   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:04.372069   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:41:04.462010   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:04.462086   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:04.466695   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:04.466753   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:04.470287   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:04.509294   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:04.509389   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.538739   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.567366   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:04.568751   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:04.571725   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572167   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:04.572191   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572415   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:04.576247   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:04.588081   22837 kubeadm.go:883] updating cluster {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:41:04.588171   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:04.588210   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:04.618331   22837 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:41:04.618391   22837 ssh_runner.go:195] Run: which lz4
	I0924 18:41:04.622176   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0924 18:41:04.622306   22837 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:41:04.626507   22837 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:41:04.626538   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:41:05.822721   22837 crio.go:462] duration metric: took 1.200469004s to copy over tarball
	I0924 18:41:05.822802   22837 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:41:07.793883   22837 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.971051538s)
	I0924 18:41:07.793914   22837 crio.go:469] duration metric: took 1.971161974s to extract the tarball
	I0924 18:41:07.793928   22837 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:41:07.830067   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:07.873646   22837 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:41:07.873666   22837 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:41:07.873673   22837 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.31.1 crio true true} ...
	I0924 18:41:07.873776   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:41:07.873869   22837 ssh_runner.go:195] Run: crio config
	I0924 18:41:07.919600   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:07.919618   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:07.919627   22837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:41:07.919646   22837 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685475 NodeName:ha-685475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:41:07.919771   22837 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
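The kubeadm config above is generated from the options struct logged at kubeadm.go:181. Purely as an illustration — this is not minikube's actual template or field names — a fragment of that config could be rendered with text/template as follows; the struct and its fields are invented here to match the logged values.

	// Illustrative sketch only: render a fragment of a kubeadm config from an
	// options struct. Field names are assumptions chosen to match the log above.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(fragment))
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.39.7",
			BindPort:         8443,
			NodeName:         "ha-685475",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.31.1",
		}
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}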
	I0924 18:41:07.919801   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:07.919842   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:07.935217   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:07.935310   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0924 18:41:07.935358   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:07.945016   22837 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:41:07.945087   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 18:41:07.954390   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0924 18:41:07.970734   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:07.986979   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0924 18:41:08.003862   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0924 18:41:08.020369   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:08.024317   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:08.036613   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:08.156453   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:08.174003   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.7
	I0924 18:41:08.174027   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:08.174053   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.174225   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:08.174336   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:08.174354   22837 certs.go:256] generating profile certs ...
	I0924 18:41:08.174424   22837 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:08.174441   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt with IP's: []
	I0924 18:41:08.287248   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt ...
	I0924 18:41:08.287273   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt: {Name:mkaceb17faeee44eeb1f13a92453dd9237d1455b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287463   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key ...
	I0924 18:41:08.287478   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key: {Name:mkbd762d73e102d20739c242c4dc875214afceba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287585   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac
	I0924 18:41:08.287601   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]
	I0924 18:41:08.420508   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac ...
	I0924 18:41:08.420553   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac: {Name:mk9b48c67c74aab074e9cdcef91880f465361f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420805   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac ...
	I0924 18:41:08.420830   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac: {Name:mk62b56ebe2e46561c15a5b3088127454fecceb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420950   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:08.421025   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:41:08.421075   22837 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:08.421093   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt with IP's: []
	I0924 18:41:08.543472   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt ...
	I0924 18:41:08.543508   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt: {Name:mk21cf6990553b97f2812e699190b5a379943f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543691   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key ...
	I0924 18:41:08.543706   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key: {Name:mk47726c7ba1340c780d325e14f433f9d0586f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543805   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:08.543829   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:08.543844   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:08.543860   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:08.543879   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:08.543898   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:08.543917   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:08.543935   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:08.543997   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:08.544044   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:08.544059   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:08.544094   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:08.544127   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:08.544158   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:08.544210   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:08.544249   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.544270   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.544289   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.544858   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:08.570597   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:08.594223   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:08.617808   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:08.641632   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 18:41:08.665659   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:08.689661   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:08.713308   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:08.737197   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:08.762148   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:08.788186   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:08.813589   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:41:08.831743   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:08.837364   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:08.849428   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854475   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854538   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.860154   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:41:08.871267   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:08.882296   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886561   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886625   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.892075   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:08.902853   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:08.913706   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.917998   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.918060   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.923875   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:41:08.937683   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:08.942083   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:08.942144   22837 kubeadm.go:392] StartCluster: {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:08.942205   22837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:41:08.942246   22837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:41:08.996144   22837 cri.go:89] found id: ""
	I0924 18:41:08.996211   22837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:41:09.006172   22837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:41:09.015736   22837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:41:09.025439   22837 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:41:09.025460   22837 kubeadm.go:157] found existing configuration files:
	
	I0924 18:41:09.025508   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:41:09.034746   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:41:09.034800   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:41:09.044191   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:41:09.053192   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:41:09.053253   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:41:09.062560   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.071543   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:41:09.071616   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.080990   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:41:09.089937   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:41:09.090011   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:41:09.099338   22837 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:41:09.200102   22837 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:41:09.200206   22837 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:41:09.288288   22837 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:41:09.288440   22837 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:41:09.288580   22837 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:41:09.299649   22837 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:41:09.414648   22837 out.go:235]   - Generating certificates and keys ...
	I0924 18:41:09.414792   22837 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:41:09.414929   22837 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:41:09.453019   22837 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:41:09.665252   22837 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:41:09.786773   22837 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:41:09.895285   22837 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:41:10.253463   22837 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:41:10.253620   22837 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.418238   22837 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:41:10.418481   22837 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.573281   22837 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:41:10.657693   22837 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:41:10.807528   22837 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:41:10.807638   22837 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:41:10.929209   22837 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:41:11.169941   22837 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:41:11.264501   22837 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:41:11.399230   22837 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:41:11.616228   22837 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:41:11.616627   22837 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:41:11.619943   22837 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:41:11.621650   22837 out.go:235]   - Booting up control plane ...
	I0924 18:41:11.621746   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:41:11.621863   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:41:11.621965   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:41:11.642334   22837 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:41:11.648424   22837 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:41:11.648483   22837 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:41:11.789428   22837 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:41:11.789563   22837 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:41:12.790634   22837 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001755257s
	I0924 18:41:12.790735   22837 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:41:18.478058   22837 kubeadm.go:310] [api-check] The API server is healthy after 5.68964956s
	I0924 18:41:18.493860   22837 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:41:18.510122   22837 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:41:18.541786   22837 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:41:18.541987   22837 kubeadm.go:310] [mark-control-plane] Marking the node ha-685475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:41:18.554344   22837 kubeadm.go:310] [bootstrap-token] Using token: 7i3lxo.hk68lojtv0dswhd7
	I0924 18:41:18.555710   22837 out.go:235]   - Configuring RBAC rules ...
	I0924 18:41:18.555857   22837 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:41:18.562776   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:41:18.572835   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:41:18.581420   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:41:18.584989   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:41:18.590727   22837 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:41:18.886783   22837 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:41:19.308273   22837 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:41:19.885351   22837 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:41:19.886864   22837 kubeadm.go:310] 
	I0924 18:41:19.886947   22837 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:41:19.886955   22837 kubeadm.go:310] 
	I0924 18:41:19.887084   22837 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:41:19.887110   22837 kubeadm.go:310] 
	I0924 18:41:19.887149   22837 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:41:19.887252   22837 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:41:19.887307   22837 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:41:19.887317   22837 kubeadm.go:310] 
	I0924 18:41:19.887400   22837 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:41:19.887409   22837 kubeadm.go:310] 
	I0924 18:41:19.887475   22837 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:41:19.887492   22837 kubeadm.go:310] 
	I0924 18:41:19.887567   22837 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:41:19.887670   22837 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:41:19.887778   22837 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:41:19.887818   22837 kubeadm.go:310] 
	I0924 18:41:19.887934   22837 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:41:19.888013   22837 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:41:19.888020   22837 kubeadm.go:310] 
	I0924 18:41:19.888111   22837 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888252   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:41:19.888288   22837 kubeadm.go:310] 	--control-plane 
	I0924 18:41:19.888296   22837 kubeadm.go:310] 
	I0924 18:41:19.888373   22837 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:41:19.888384   22837 kubeadm.go:310] 
	I0924 18:41:19.888452   22837 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888539   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:41:19.889407   22837 kubeadm.go:310] W0924 18:41:09.185692     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889718   22837 kubeadm.go:310] W0924 18:41:09.186387     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889856   22837 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:41:19.889883   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:19.889890   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:19.892313   22837 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 18:41:19.893563   22837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 18:41:19.898820   22837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 18:41:19.898856   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 18:41:19.916356   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
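
The "scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)" line means the generated kindnet manifest is streamed from memory into a file inside the VM and then applied with the bundled kubectl against the local kubeconfig. Below is a minimal, local Go sketch of that write-then-apply step using os/exec; the manifest bytes are a placeholder (not minikube's generated kindnet YAML) and the bare "kubectl" binary name is an assumption for illustration.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder manifest bytes; in minikube this is the generated kindnet YAML.
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example-cni\n")

        path := "/var/tmp/minikube/cni.yaml" // same target path as in the log
        if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            panic(err)
        }

        // Apply it with an explicit kubeconfig, mirroring the logged command.
        out, err := exec.Command("kubectl",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "apply", "-f", path).CombinedOutput()
        fmt.Println(string(out), err)
    }
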
	I0924 18:41:20.290022   22837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:41:20.290096   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.290149   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475 minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=true
	I0924 18:41:20.340090   22837 ops.go:34] apiserver oom_adj: -16
	I0924 18:41:20.448075   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.948257   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.448755   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.948360   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.448489   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.948535   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:23.038503   22837 kubeadm.go:1113] duration metric: took 2.748466322s to wait for elevateKubeSystemPrivileges
	I0924 18:41:23.038543   22837 kubeadm.go:394] duration metric: took 14.096402684s to StartCluster
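
The repeated "kubectl get sa default" calls between 18:41:20.4 and 18:41:22.9 are minikube polling, roughly every 500ms, until the cluster's default ServiceAccount exists while the minikube-rbac ClusterRoleBinding created just above takes effect (the whole sequence is what the elevateKubeSystemPrivileges duration covers). Here is a minimal Go sketch of that poll loop; the kubeconfig path is the one from the logged commands and the bare "kubectl" name is an assumption, not minikube's actual helper.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubeconfig := "/var/lib/minikube/kubeconfig" // path used in the logged commands

        // Poll `kubectl get sa default` every 500ms until it succeeds or we give up.
        deadline := time.Now().Add(2 * time.Minute)
        for {
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run()
            if err == nil {
                fmt.Println("default ServiceAccount is present")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("gave up waiting for default ServiceAccount:", err)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
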
	I0924 18:41:23.038566   22837 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.038649   22837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.039313   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.039501   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:41:23.039502   22837 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.039576   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:41:23.039526   22837 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 18:41:23.039598   22837 addons.go:69] Setting storage-provisioner=true in profile "ha-685475"
	I0924 18:41:23.039615   22837 addons.go:234] Setting addon storage-provisioner=true in "ha-685475"
	I0924 18:41:23.039616   22837 addons.go:69] Setting default-storageclass=true in profile "ha-685475"
	I0924 18:41:23.039640   22837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-685475"
	I0924 18:41:23.039645   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.039696   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.040106   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040124   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040143   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.040155   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.054906   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0924 18:41:23.055238   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0924 18:41:23.055452   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055608   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055957   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.055986   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056221   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.056245   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056263   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056409   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.056534   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056961   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.056989   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.058582   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.058812   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 18:41:23.059257   22837 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 18:41:23.059411   22837 addons.go:234] Setting addon default-storageclass=true in "ha-685475"
	I0924 18:41:23.059452   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.059725   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.059753   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.070908   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0924 18:41:23.071353   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.071899   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.071925   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.072270   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.072451   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.073858   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0924 18:41:23.073870   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.074183   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.074573   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.074598   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.074991   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.075491   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.075531   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.075879   22837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:41:23.077225   22837 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.077247   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:41:23.077265   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.079855   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080215   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.080236   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080425   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.080576   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.080722   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.080813   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.091212   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0924 18:41:23.091717   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.092134   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.092151   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.092427   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.092615   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.094110   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.094306   22837 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.094320   22837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:41:23.094337   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.097202   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097634   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.097661   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097807   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.097981   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.098125   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.098244   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.157451   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:41:23.219332   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.236503   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.513482   22837 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
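
The long bash pipeline at 18:41:23.157451 is what produces the "host record injected" line above: it fetches the coredns ConfigMap, uses sed to insert a hosts{} stanza (mapping host.minikube.internal to the gateway IP 192.168.39.1) just before the "forward . /etc/resolv.conf" directive and a log directive just before "errors", then pipes the result to kubectl replace. Below is a minimal Go sketch of that Corefile edit done with string handling instead of sed; the abbreviated sample Corefile and the helper name are assumptions for illustration.

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord mirrors the sed expressions in the logged pipeline:
    // a hosts{} stanza before "forward ." and a "log" directive before "errors".
    func injectHostRecord(corefile, hostIP string) string {
        hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }", hostIP)
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
                out = append(out, hosts) // hosts{} stanza just before forward
            }
            if trimmed == "errors" {
                out = append(out, "    log") // log directive just before errors
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        // Abbreviated sample Corefile (assumption; the real one comes from the
        // coredns ConfigMap fetched with kubectl -n kube-system get configmap coredns).
        sample := ".:53 {\n    errors\n    health\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}"
        fmt.Println(injectHostRecord(sample, "192.168.39.1"))
    }
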
	I0924 18:41:23.780293   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780320   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780368   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780387   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780643   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780651   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780659   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780662   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780669   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780671   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780677   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780679   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780872   22837 main.go:141] libmachine: (ha-685475) DBG | Closing plugin on server side
	I0924 18:41:23.780906   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780911   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780967   22837 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 18:41:23.780985   22837 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 18:41:23.781073   22837 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 18:41:23.781083   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.781093   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.781099   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.795500   22837 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0924 18:41:23.796218   22837 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 18:41:23.796237   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.796248   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.796255   22837 round_trippers.go:473]     Content-Type: application/json
	I0924 18:41:23.796259   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.798194   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
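
The round_trippers lines show the storageclass addon talking directly to the apiserver through the HA virtual IP (192.168.39.254:8443) using the client certificate from the kapi.go config above: a GET of the StorageClass list followed by a PUT of the "standard" class. Below is a hedged Go sketch of issuing such an authenticated GET with net/http; the certificate paths are the ones from this run, and the PUT step (which resubmits the fetched object with its annotations updated) is only noted in a comment, not reproduced.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Client certificate, key and CA taken from the kapi.go client config above.
        cert, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt",
            "/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}

        // GET the StorageClass list through the HA virtual IP, as in the log.
        resp, err := client.Get("https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
        // The subsequent PUT to .../storageclasses/standard sends the fetched
        // object back with updated annotations; that step is omitted here.
    }
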
	I0924 18:41:23.798350   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.798369   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.798603   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.798620   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.800167   22837 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 18:41:23.801238   22837 addons.go:510] duration metric: took 761.715981ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 18:41:23.801274   22837 start.go:246] waiting for cluster config update ...
	I0924 18:41:23.801288   22837 start.go:255] writing updated cluster config ...
	I0924 18:41:23.802705   22837 out.go:201] 
	I0924 18:41:23.804213   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.804273   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.806007   22837 out.go:177] * Starting "ha-685475-m02" control-plane node in "ha-685475" cluster
	I0924 18:41:23.807501   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:23.807522   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:41:23.807605   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:41:23.807617   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:41:23.807680   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.807853   22837 start.go:360] acquireMachinesLock for ha-685475-m02: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:41:23.807905   22837 start.go:364] duration metric: took 31.255µs to acquireMachinesLock for "ha-685475-m02"
	I0924 18:41:23.807922   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.808020   22837 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 18:41:23.809639   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:41:23.809702   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.809724   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.823910   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0924 18:41:23.824393   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.824838   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.824857   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.825193   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.825352   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:23.825501   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:23.825615   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:41:23.825634   22837 client.go:168] LocalClient.Create starting
	I0924 18:41:23.825657   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:41:23.825684   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825697   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825743   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:41:23.825761   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825771   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825785   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:41:23.825792   22837 main.go:141] libmachine: (ha-685475-m02) Calling .PreCreateCheck
	I0924 18:41:23.825960   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:23.826338   22837 main.go:141] libmachine: Creating machine...
	I0924 18:41:23.826355   22837 main.go:141] libmachine: (ha-685475-m02) Calling .Create
	I0924 18:41:23.826493   22837 main.go:141] libmachine: (ha-685475-m02) Creating KVM machine...
	I0924 18:41:23.827625   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing default KVM network
	I0924 18:41:23.827759   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing private KVM network mk-ha-685475
	I0924 18:41:23.827871   22837 main.go:141] libmachine: (ha-685475-m02) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:23.827888   22837 main.go:141] libmachine: (ha-685475-m02) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:41:23.827966   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:23.827870   23203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:23.828041   22837 main.go:141] libmachine: (ha-685475-m02) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:41:24.081911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.081766   23203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa...
	I0924 18:41:24.287254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287116   23203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk...
	I0924 18:41:24.287289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing magic tar header
	I0924 18:41:24.287303   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing SSH key tar header
	I0924 18:41:24.287322   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287234   23203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:24.287343   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02
	I0924 18:41:24.287363   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 (perms=drwx------)
	I0924 18:41:24.287376   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:41:24.287386   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:41:24.287429   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:24.287454   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:41:24.287465   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:41:24.287486   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:41:24.287508   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:41:24.287521   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:41:24.287531   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:41:24.287541   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:41:24.287551   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home
	I0924 18:41:24.287560   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Skipping /home - not owner
	I0924 18:41:24.287570   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:24.288399   22837 main.go:141] libmachine: (ha-685475-m02) define libvirt domain using xml: 
	I0924 18:41:24.288421   22837 main.go:141] libmachine: (ha-685475-m02) <domain type='kvm'>
	I0924 18:41:24.288434   22837 main.go:141] libmachine: (ha-685475-m02)   <name>ha-685475-m02</name>
	I0924 18:41:24.288441   22837 main.go:141] libmachine: (ha-685475-m02)   <memory unit='MiB'>2200</memory>
	I0924 18:41:24.288467   22837 main.go:141] libmachine: (ha-685475-m02)   <vcpu>2</vcpu>
	I0924 18:41:24.288485   22837 main.go:141] libmachine: (ha-685475-m02)   <features>
	I0924 18:41:24.288491   22837 main.go:141] libmachine: (ha-685475-m02)     <acpi/>
	I0924 18:41:24.288498   22837 main.go:141] libmachine: (ha-685475-m02)     <apic/>
	I0924 18:41:24.288503   22837 main.go:141] libmachine: (ha-685475-m02)     <pae/>
	I0924 18:41:24.288510   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288517   22837 main.go:141] libmachine: (ha-685475-m02)   </features>
	I0924 18:41:24.288525   22837 main.go:141] libmachine: (ha-685475-m02)   <cpu mode='host-passthrough'>
	I0924 18:41:24.288550   22837 main.go:141] libmachine: (ha-685475-m02)   
	I0924 18:41:24.288565   22837 main.go:141] libmachine: (ha-685475-m02)   </cpu>
	I0924 18:41:24.288574   22837 main.go:141] libmachine: (ha-685475-m02)   <os>
	I0924 18:41:24.288586   22837 main.go:141] libmachine: (ha-685475-m02)     <type>hvm</type>
	I0924 18:41:24.288602   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='cdrom'/>
	I0924 18:41:24.288616   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='hd'/>
	I0924 18:41:24.288629   22837 main.go:141] libmachine: (ha-685475-m02)     <bootmenu enable='no'/>
	I0924 18:41:24.288636   22837 main.go:141] libmachine: (ha-685475-m02)   </os>
	I0924 18:41:24.288648   22837 main.go:141] libmachine: (ha-685475-m02)   <devices>
	I0924 18:41:24.288661   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='cdrom'>
	I0924 18:41:24.288679   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/boot2docker.iso'/>
	I0924 18:41:24.288689   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hdc' bus='scsi'/>
	I0924 18:41:24.288695   22837 main.go:141] libmachine: (ha-685475-m02)       <readonly/>
	I0924 18:41:24.288703   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288712   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='disk'>
	I0924 18:41:24.288725   22837 main.go:141] libmachine: (ha-685475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:41:24.288738   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk'/>
	I0924 18:41:24.288748   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hda' bus='virtio'/>
	I0924 18:41:24.288756   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288767   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288778   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='mk-ha-685475'/>
	I0924 18:41:24.288788   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288796   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288805   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288814   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='default'/>
	I0924 18:41:24.288827   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288835   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288848   22837 main.go:141] libmachine: (ha-685475-m02)     <serial type='pty'>
	I0924 18:41:24.288862   22837 main.go:141] libmachine: (ha-685475-m02)       <target port='0'/>
	I0924 18:41:24.288876   22837 main.go:141] libmachine: (ha-685475-m02)     </serial>
	I0924 18:41:24.288885   22837 main.go:141] libmachine: (ha-685475-m02)     <console type='pty'>
	I0924 18:41:24.288892   22837 main.go:141] libmachine: (ha-685475-m02)       <target type='serial' port='0'/>
	I0924 18:41:24.288900   22837 main.go:141] libmachine: (ha-685475-m02)     </console>
	I0924 18:41:24.288911   22837 main.go:141] libmachine: (ha-685475-m02)     <rng model='virtio'>
	I0924 18:41:24.288922   22837 main.go:141] libmachine: (ha-685475-m02)       <backend model='random'>/dev/random</backend>
	I0924 18:41:24.288928   22837 main.go:141] libmachine: (ha-685475-m02)     </rng>
	I0924 18:41:24.288935   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288944   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288956   22837 main.go:141] libmachine: (ha-685475-m02)   </devices>
	I0924 18:41:24.288965   22837 main.go:141] libmachine: (ha-685475-m02) </domain>
	I0924 18:41:24.288975   22837 main.go:141] libmachine: (ha-685475-m02) 
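
The XML printed line by line above is the libvirt domain definition the kvm2 driver passes to libvirt when creating ha-685475-m02: a 2-vCPU, 2200 MiB guest booting the boot2docker ISO with a raw disk and two virtio NICs (one on the private mk-ha-685475 network, one on the libvirt default network). Here is a minimal Go sketch of producing a comparable definition with text/template; the template, field names, and shortened file paths are assumptions for illustration, not the driver's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed-down domain template; the real driver's template defines more devices.
    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
        <interface type='network'><source network='default'/><model type='virtio'/></interface>
      </devices>
    </domain>
    `

    type machine struct {
        Name, ISOPath, DiskPath, PrivateNet string
        MemoryMiB, CPUs                     int
    }

    func main() {
        m := machine{
            Name:       "ha-685475-m02",
            ISOPath:    "/path/to/boot2docker.iso",       // assumption: shortened path
            DiskPath:   "/path/to/ha-685475-m02.rawdisk", // assumption: shortened path
            PrivateNet: "mk-ha-685475",
            MemoryMiB:  2200,
            CPUs:       2,
        }
        // Render the domain definition to stdout; the driver hands this XML to libvirt.
        template.Must(template.New("domain").Parse(domainXML)).Execute(os.Stdout, m)
    }
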
	I0924 18:41:24.294992   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:bf:94:ad in network default
	I0924 18:41:24.295458   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring networks are active...
	I0924 18:41:24.295479   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:24.296154   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network default is active
	I0924 18:41:24.296453   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network mk-ha-685475 is active
	I0924 18:41:24.296812   22837 main.go:141] libmachine: (ha-685475-m02) Getting domain xml...
	I0924 18:41:24.297403   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:25.511930   22837 main.go:141] libmachine: (ha-685475-m02) Waiting to get IP...
	I0924 18:41:25.512699   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.513104   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.513143   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.513091   23203 retry.go:31] will retry after 234.16067ms: waiting for machine to come up
	I0924 18:41:25.748453   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.748989   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.749022   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.748910   23203 retry.go:31] will retry after 253.354873ms: waiting for machine to come up
	I0924 18:41:26.004434   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.004963   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.004991   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.004930   23203 retry.go:31] will retry after 301.553898ms: waiting for machine to come up
	I0924 18:41:26.308451   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.308934   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.308961   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.308888   23203 retry.go:31] will retry after 500.936612ms: waiting for machine to come up
	I0924 18:41:26.811529   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.812030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.812051   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.811979   23203 retry.go:31] will retry after 494.430185ms: waiting for machine to come up
	I0924 18:41:27.307617   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.308186   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.308222   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.308158   23203 retry.go:31] will retry after 624.183064ms: waiting for machine to come up
	I0924 18:41:27.933772   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.934215   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.934243   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.934171   23203 retry.go:31] will retry after 1.048717591s: waiting for machine to come up
	I0924 18:41:28.984256   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:28.984722   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:28.984750   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:28.984681   23203 retry.go:31] will retry after 1.344803754s: waiting for machine to come up
	I0924 18:41:30.331184   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:30.331665   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:30.331695   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:30.331611   23203 retry.go:31] will retry after 1.462041717s: waiting for machine to come up
	I0924 18:41:31.796038   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:31.796495   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:31.796521   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:31.796439   23203 retry.go:31] will retry after 1.946036169s: waiting for machine to come up
	I0924 18:41:33.743834   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:33.744264   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:33.744289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:33.744229   23203 retry.go:31] will retry after 1.953552894s: waiting for machine to come up
	I0924 18:41:35.699784   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:35.700188   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:35.700207   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:35.700142   23203 retry.go:31] will retry after 3.550334074s: waiting for machine to come up
	I0924 18:41:39.251459   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:39.251859   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:39.251883   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:39.251819   23203 retry.go:31] will retry after 3.096214207s: waiting for machine to come up
	I0924 18:41:42.351720   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:42.352147   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:42.352168   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:42.352109   23203 retry.go:31] will retry after 5.133975311s: waiting for machine to come up
	I0924 18:41:47.489864   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490368   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has current primary IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490384   22837 main.go:141] libmachine: (ha-685475-m02) Found IP for machine: 192.168.39.17
	I0924 18:41:47.490392   22837 main.go:141] libmachine: (ha-685475-m02) Reserving static IP address...
	I0924 18:41:47.490898   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find host DHCP lease matching {name: "ha-685475-m02", mac: "52:54:00:c4:34:39", ip: "192.168.39.17"} in network mk-ha-685475
	I0924 18:41:47.562679   22837 main.go:141] libmachine: (ha-685475-m02) Reserved static IP address: 192.168.39.17
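
The "will retry after ..." lines above come from the driver polling the DHCP leases of mk-ha-685475 for the new guest's MAC address with a growing, randomized backoff until an IP appears (here about 23 seconds after the domain was created). Below is a minimal Go sketch of that retry-with-backoff pattern; the lookup function is a stand-in, not the driver's actual lease parser.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout expires,
    // sleeping a little longer (with jitter) after each failed attempt, similar
    // to the retry.go lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        // Stand-in lookup: pretends the DHCP lease appears after ~3 seconds.
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            if time.Since(start) > 3*time.Second {
                return "192.168.39.17", nil
            }
            return "", errors.New("unable to find current IP address")
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
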
	I0924 18:41:47.562701   22837 main.go:141] libmachine: (ha-685475-m02) Waiting for SSH to be available...
	I0924 18:41:47.562710   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Getting to WaitForSSH function...
	I0924 18:41:47.565356   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565738   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.565768   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565964   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH client type: external
	I0924 18:41:47.565988   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa (-rw-------)
	I0924 18:41:47.566029   22837 main.go:141] libmachine: (ha-685475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:47.566047   22837 main.go:141] libmachine: (ha-685475-m02) DBG | About to run SSH command:
	I0924 18:41:47.566064   22837 main.go:141] libmachine: (ha-685475-m02) DBG | exit 0
	I0924 18:41:47.686618   22837 main.go:141] libmachine: (ha-685475-m02) DBG | SSH cmd err, output: <nil>: 
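
Until the guest's own sshd answers, WaitForSSH shells out to the system ssh client with the exact options logged above and runs "exit 0"; a clean exit (the empty "SSH cmd err, output: <nil>" line) marks the machine as reachable. Here is a minimal Go sketch of that probe via os/exec; the address and key path are the values from this particular run and would differ per profile.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same options as the "Using SSH client type: external" line above.
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa",
            "-p", "22",
            "docker@192.168.39.17",
            "exit 0",
        }
        // A zero exit status means sshd is up and the key is accepted.
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
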
	I0924 18:41:47.686909   22837 main.go:141] libmachine: (ha-685475-m02) KVM machine creation complete!
	I0924 18:41:47.687246   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:47.687732   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.687897   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.688053   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:47.688065   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetState
	I0924 18:41:47.689263   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:47.689278   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:47.689283   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:47.689288   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.691350   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691620   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.691646   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691809   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.691967   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692084   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692218   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.692337   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.692527   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.692540   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:47.794027   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:47.794050   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:47.794060   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.796879   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797224   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.797254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797407   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.797704   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.797913   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.798111   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.798287   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.798451   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.798462   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:47.903254   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:47.903300   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:47.903305   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:47.903313   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903564   22837 buildroot.go:166] provisioning hostname "ha-685475-m02"
	I0924 18:41:47.903593   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903777   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.906337   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906672   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.906694   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906854   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.907009   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907154   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907284   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.907446   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.907641   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.907655   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m02 && echo "ha-685475-m02" | sudo tee /etc/hostname
	I0924 18:41:48.025784   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m02
	
	I0924 18:41:48.025820   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.028558   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.028880   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.028907   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.029107   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.029274   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029415   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029559   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.029722   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.029915   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.029932   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:48.139194   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
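The hostname step above runs an idempotent /etc/hosts edit over SSH: replace an existing 127.0.1.1 entry if present, otherwise append one. A small sketch that rebuilds the same shell snippet from the node name (hostsCmd is an illustrative helper name):

package main

import "fmt"

// hostsCmd mirrors the shell fragment logged above: only touch /etc/hosts
// when the hostname is not already mapped.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() { fmt.Println(hostsCmd("ha-685475-m02")) }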
	I0924 18:41:48.139227   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:48.139248   22837 buildroot.go:174] setting up certificates
	I0924 18:41:48.139267   22837 provision.go:84] configureAuth start
	I0924 18:41:48.139280   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:48.139566   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.142585   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143024   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.143053   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143201   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.145124   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145481   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.145505   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145654   22837 provision.go:143] copyHostCerts
	I0924 18:41:48.145692   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145726   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:48.145735   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145801   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:48.145869   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145886   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:48.145891   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145915   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:48.145955   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145971   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:48.145977   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145998   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:48.146040   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m02 san=[127.0.0.1 192.168.39.17 ha-685475-m02 localhost minikube]
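The server certificate generated above carries the node's IPs and hostnames as SANs. A minimal sketch of issuing such a certificate with crypto/x509, using the SAN list from the log; for brevity it is self-signed here, whereas the real flow signs it with the ca.pem/ca-key.pem pair listed in the auth options:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-685475-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the log line above.
		DNSNames:    []string{"ha-685475-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.17")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}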
	I0924 18:41:48.245573   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:48.245622   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:48.245643   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.248802   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249274   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.249306   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249504   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.249706   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.249847   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.249994   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.328761   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:48.328834   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:48.362627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:48.362710   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:41:48.384868   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:48.384964   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:41:48.408148   22837 provision.go:87] duration metric: took 268.869175ms to configureAuth
	I0924 18:41:48.408177   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:48.408340   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:48.408409   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.410657   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411048   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.411073   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411241   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.411430   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411632   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411784   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.411937   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.412089   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.412102   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:48.621639   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
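The container-runtime option above is written to /etc/sysconfig/crio.minikube before CRI-O is restarted. A hedged sketch that composes the same payload and command from the service CIDR (crioOptsCmd is an illustrative helper, not minikube's actual function):

package main

import "fmt"

// crioOptsCmd builds the sysconfig payload and the install-and-restart
// command shown in the SSH step above.
func crioOptsCmd(insecureRegistry string) string {
	payload := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, payload)
}

func main() { fmt.Println(crioOptsCmd("10.96.0.0/12")) }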
	I0924 18:41:48.621659   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:48.621667   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetURL
	I0924 18:41:48.622862   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using libvirt version 6000000
	I0924 18:41:48.624753   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625070   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.625087   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625272   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:48.625285   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:48.625291   22837 client.go:171] duration metric: took 24.799650651s to LocalClient.Create
	I0924 18:41:48.625312   22837 start.go:167] duration metric: took 24.799696127s to libmachine.API.Create "ha-685475"
	I0924 18:41:48.625325   22837 start.go:293] postStartSetup for "ha-685475-m02" (driver="kvm2")
	I0924 18:41:48.625340   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:48.625360   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.625542   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:48.625572   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.627676   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.628052   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.628342   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.628517   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.628659   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.708913   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:48.712956   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:48.712978   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:48.713046   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:48.713130   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:48.713141   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:48.713240   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:48.722192   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:48.744383   22837 start.go:296] duration metric: took 119.042113ms for postStartSetup
	I0924 18:41:48.744432   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:48.745000   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.747573   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.747893   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.747910   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.748162   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:48.748334   22837 start.go:128] duration metric: took 24.940306164s to createHost
	I0924 18:41:48.748356   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.750542   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.750887   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.750911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.751015   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.751176   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751307   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751425   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.751593   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.751774   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.751787   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:48.851074   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203308.831222046
	
	I0924 18:41:48.851092   22837 fix.go:216] guest clock: 1727203308.831222046
	I0924 18:41:48.851099   22837 fix.go:229] Guest: 2024-09-24 18:41:48.831222046 +0000 UTC Remote: 2024-09-24 18:41:48.748344809 +0000 UTC m=+73.162730067 (delta=82.877237ms)
	I0924 18:41:48.851113   22837 fix.go:200] guest clock delta is within tolerance: 82.877237ms
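The guest-clock check above parses the output of `date +%s.%N` on the node and compares it with the host-side timestamp. A minimal sketch of that comparison; the one-second tolerance here is an assumption for the example, not the tool's configured value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestTime converts "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time (float parsing loses a little precision, fine for a check).
func guestTime(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	g, err := guestTime("1727203308.831222046")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 9, 24, 18, 41, 48, 748344809, time.UTC)
	delta := g.Sub(remote)
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < 1.0)
}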
	I0924 18:41:48.851118   22837 start.go:83] releasing machines lock for "ha-685475-m02", held for 25.043203349s
	I0924 18:41:48.851134   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.851348   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.853818   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.854112   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.854136   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.856508   22837 out.go:177] * Found network options:
	I0924 18:41:48.857890   22837 out.go:177]   - NO_PROXY=192.168.39.7
	W0924 18:41:48.859133   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.859180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859668   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859884   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859962   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:48.860002   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	W0924 18:41:48.860062   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.860122   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:48.860142   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.862654   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.862677   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863021   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863046   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863071   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863085   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863235   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863400   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863436   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863592   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863623   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863730   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863735   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.863845   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:49.100910   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:49.106567   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:49.106646   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:49.123612   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:49.123643   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:49.123708   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:49.142937   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:49.156490   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:49.156545   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:49.169527   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:49.182177   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:49.291858   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:49.459326   22837 docker.go:233] disabling docker service ...
	I0924 18:41:49.459396   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:49.472974   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:49.485001   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:49.613925   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:49.729893   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:41:49.742924   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:49.760372   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:49.760435   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.771854   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:49.771935   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.783072   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.792955   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.802788   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:49.813021   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.822734   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.838535   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.848192   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:49.856844   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:49.856899   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:49.869401   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:41:49.878419   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:50.004449   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
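The CRI-O setup above is a series of sed edits to the 02-crio.conf drop-in followed by a daemon-reload and restart. A sketch that collects the key edits as a command list (crioConfigCmds is an illustrative name; only the commands that appear in the log are reproduced):

package main

import "fmt"

// crioConfigCmds returns the drop-in edits logged above: pin the pause image,
// switch the cgroup manager, clear stale CNI state, then restart CRI-O.
func crioConfigCmds(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo rm -rf /etc/cni/net.mk",
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}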
	I0924 18:41:50.089923   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:50.090004   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:50.094371   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:50.094436   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:50.097914   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:50.136366   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:50.136456   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:50.162234   22837 ssh_runner.go:195] Run: crio --version
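After the restart the start-up waits up to 60s for the CRI socket before querying crictl. A minimal sketch of that wait loop, polling the path locally here where the real flow polls over SSH (waitForPath is an illustrative helper):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}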
	I0924 18:41:50.190445   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:50.191917   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:41:50.193261   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:50.195868   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196181   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:50.196210   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196416   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:50.200556   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:50.212678   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:41:50.212868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:50.213191   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.213221   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.227693   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0924 18:41:50.228149   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.228595   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.228613   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.228905   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.229090   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:50.230680   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:50.230980   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.231004   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.244907   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0924 18:41:50.245219   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.245604   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.245626   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.245901   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.246055   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:50.246187   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.17
	I0924 18:41:50.246201   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:50.246216   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.246327   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:50.246369   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:50.246378   22837 certs.go:256] generating profile certs ...
	I0924 18:41:50.246440   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:50.246464   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698
	I0924 18:41:50.246474   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.254]
	I0924 18:41:50.598027   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 ...
	I0924 18:41:50.598058   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698: {Name:mkf8f0e99ce8df80e2d67426d0c1db2d0002fe45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598227   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 ...
	I0924 18:41:50.598240   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698: {Name:mk2fd7db9063cce26eb5db83e155e40a1d36f1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598308   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:50.598434   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:41:50.598561   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:50.598577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:50.598590   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:50.598601   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:50.598615   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:50.598627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:50.598639   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:50.598651   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:50.598663   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:50.598707   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:50.598733   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:50.598743   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:50.598763   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:50.598790   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:50.598808   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:50.598860   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:50.598885   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:50.598899   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:50.598912   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:50.598943   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:50.601751   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602261   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:50.602302   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:50.602632   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:50.602771   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:50.602890   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:50.675173   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:41:50.679977   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:41:50.690734   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:41:50.694531   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:41:50.704513   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:41:50.708108   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:41:50.717272   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:41:50.721123   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:41:50.730473   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:41:50.733963   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:41:50.742805   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:41:50.746245   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:41:50.755896   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:50.779844   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:50.802343   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:50.824768   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:50.846513   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 18:41:50.868210   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:50.890482   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:50.912726   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:50.933992   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:50.954961   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:50.976681   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:50.999088   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:41:51.016166   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:41:51.032873   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:41:51.047752   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:41:51.062770   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:41:51.078108   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:41:51.093675   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 18:41:51.109375   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:51.115481   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:51.125989   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130012   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130079   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.135264   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:51.144716   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:51.154096   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158032   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158077   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.163212   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:41:51.172662   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:51.182229   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186313   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186363   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.191704   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:41:51.202091   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:51.205856   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:51.205922   22837 kubeadm.go:934] updating node {m02 192.168.39.17 8443 v1.31.1 crio true true} ...
	I0924 18:41:51.206011   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
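The kubelet unit above is rendered per node with its hostname override and node IP. A sketch of rendering the same drop-in with text/template; the template constant and field names are illustrative, only the unit text is taken from the log:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values for this node, as logged above.
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-685475-m02",
		"NodeIP":            "192.168.39.17",
	})
}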
	I0924 18:41:51.206039   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:51.206072   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:51.221517   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:51.221584   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
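Only a handful of values in the kube-vip manifest above are node- or cluster-specific: the VIP address, the interface, and the API server port; the rest of the static pod is boilerplate. A hedged sketch that isolates those inputs (kubeVIPEnv is an illustrative helper, and the printed fragment is not a complete manifest):

package main

import "fmt"

// kubeVIPEnv returns the parameterised env entries from the manifest above.
func kubeVIPEnv(vip, iface, port string) map[string]string {
	return map[string]string{
		"vip_arp":       "true",
		"port":          port,
		"vip_interface": iface,
		"address":       vip,
		"cp_enable":     "true",
		"lb_enable":     "true",
		"lb_port":       port,
	}
}

func main() {
	for k, v := range kubeVIPEnv("192.168.39.254", "eth0", "8443") {
		fmt.Printf("    - name: %s\n      value: %q\n", k, v)
	}
}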
	I0924 18:41:51.221651   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.229924   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:41:51.229982   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.238555   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:41:51.238577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238641   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238665   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 18:41:51.238675   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 18:41:51.242749   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:41:51.242771   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:41:51.999295   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:51.999376   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:52.004346   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:41:52.004382   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:41:52.162918   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:41:52.197388   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.197497   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.207217   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:41:52.207268   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
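The binary transfer above downloads kubectl, kubeadm and kubelet from dl.k8s.io with a checksum file before copying them to the node. A minimal sketch of the download-and-verify part, assuming network access; error handling is trimmed and the URL follows the pattern in the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	want := strings.Fields(string(sum))[0]
	fmt.Println("checksum ok:", got == want)
}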
	I0924 18:41:52.538567   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:41:52.547052   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:41:52.561548   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:52.576215   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:41:52.591227   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:52.594529   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:52.604896   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:52.719375   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:52.736097   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:52.736483   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:52.736538   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:52.752065   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0924 18:41:52.752444   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:52.752959   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:52.752982   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:52.753304   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:52.753474   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:52.753613   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:52.753696   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:41:52.753710   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:52.756694   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757114   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:52.757131   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757308   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:52.757468   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:52.757629   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:52.757745   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:52.888925   22837 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:52.888975   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I0924 18:42:11.743600   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (18.8545724s)
	I0924 18:42:11.743651   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:42:12.256325   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m02 minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:42:12.517923   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:42:12.615905   22837 start.go:319] duration metric: took 19.86228628s to joinCluster
	I0924 18:42:12.616009   22837 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:42:12.616334   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:12.617637   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:42:12.618871   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:42:12.853779   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:42:12.878467   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:42:12.878815   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:42:12.878931   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:42:12.879186   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m02" to be "Ready" ...
	I0924 18:42:12.879290   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:12.879301   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:12.879309   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:12.879314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:12.895218   22837 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 18:42:13.380409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.380434   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.380445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.380450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.385029   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:13.879387   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.879410   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.879422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.879428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.883592   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:14.380062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.380082   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.380090   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.380095   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.397523   22837 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0924 18:42:14.879492   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.879513   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.879520   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.879526   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.882118   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:14.882608   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:15.380119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.380151   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.380164   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.380170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.383053   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:15.879674   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.879694   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.879702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.879708   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.882714   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.379456   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.379481   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.379490   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.379493   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.383195   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:16.880066   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.880089   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.880098   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.880105   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.882954   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.883690   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:17.380052   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.380084   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.380093   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.380096   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.384312   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:17.879766   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.879786   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.879794   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.879799   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.882650   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:18.379440   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.379460   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.379468   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.379474   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.382655   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.879894   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.879916   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.879925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.879931   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.883892   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.884363   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:19.379514   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.379537   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.379549   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.379555   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.383053   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:19.880045   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.880066   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.880075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.880080   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.883375   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:20.380221   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.380247   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.380256   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.380261   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.383167   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:20.879751   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.879771   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.879780   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.879784   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.883632   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.379420   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.379440   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.379449   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.379454   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.382852   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.383642   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:21.880087   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.880142   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.880147   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.883894   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.379995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.380016   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.380024   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.380028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.383198   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.879355   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.879379   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.879389   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.879394   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.882598   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.380170   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.380191   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.380198   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.380201   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.383280   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.383852   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:23.879484   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.879505   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.879514   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.879518   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.882485   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:24.380050   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.380072   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.380080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.380084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.383563   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:24.880157   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.880189   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.880201   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.880208   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.883633   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.379493   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.379514   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.379522   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.379527   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.382668   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.880369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.880389   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.880398   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.880401   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.884483   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:25.884968   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:26.380398   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.380418   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.380426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.380431   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.384043   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:26.880095   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.880131   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.880136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.884191   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:27.380154   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.380180   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.380192   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.380199   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.383272   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:27.879506   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.879528   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.879539   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.879556   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.882360   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:28.380188   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.380208   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.380217   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.380222   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.383324   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:28.384179   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:28.880029   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.880052   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.880064   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.880072   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.883130   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.380071   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.380098   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.380110   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.380117   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.383220   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.880044   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.880064   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.880072   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.880077   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.883469   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.379846   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.379865   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.379873   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.379877   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.382760   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.880337   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.880358   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.880367   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.880371   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.883587   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.884005   22837 node_ready.go:49] node "ha-685475-m02" has status "Ready":"True"
	I0924 18:42:30.884024   22837 node_ready.go:38] duration metric: took 18.004817095s for node "ha-685475-m02" to be "Ready" ...
	I0924 18:42:30.884035   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:42:30.884109   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:30.884120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.884130   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.884136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.889226   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:42:30.898516   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.898598   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:42:30.898608   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.898616   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.898621   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.901236   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.901749   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.901762   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.901769   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.901773   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.903992   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.904550   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.904563   22837 pod_ready.go:82] duration metric: took 6.024673ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904570   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:42:30.904627   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.904634   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.904639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.907019   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.907540   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.907554   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.907560   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.907564   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.909829   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.910347   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.910361   22837 pod_ready.go:82] duration metric: took 5.783749ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910369   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910412   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:42:30.910421   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.910427   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.910431   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.912745   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.913606   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.913622   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.913632   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.913639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.916274   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.916867   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.916881   22837 pod_ready.go:82] duration metric: took 6.50607ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916889   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916939   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:42:30.916948   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.916955   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.916960   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.919434   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.919982   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.919996   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.920003   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.920007   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.921770   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0924 18:42:30.922347   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.922367   22837 pod_ready.go:82] duration metric: took 5.471344ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.922386   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.080824   22837 request.go:632] Waited for 158.3458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080885   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080893   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.080904   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.080910   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.084145   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.281150   22837 request.go:632] Waited for 196.368053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281219   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281226   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.281237   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.281243   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.284822   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.285606   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.285626   22837 pod_ready.go:82] duration metric: took 363.227315ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.285638   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.480778   22837 request.go:632] Waited for 195.072153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480848   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480855   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.480868   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.480875   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.484120   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.681047   22837 request.go:632] Waited for 196.341286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681125   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681133   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.681148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.681151   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.684093   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:31.684648   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.684666   22837 pod_ready.go:82] duration metric: took 399.019878ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.684678   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.880772   22837 request.go:632] Waited for 196.018851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880838   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880846   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.880865   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.880873   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.884578   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.080481   22837 request.go:632] Waited for 195.272795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080548   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080556   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.080567   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.080574   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.083669   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.084153   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.084170   22837 pod_ready.go:82] duration metric: took 399.485153ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.084179   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.281286   22837 request.go:632] Waited for 197.043639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281361   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281367   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.281374   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.281379   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.284317   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:32.481341   22837 request.go:632] Waited for 196.394211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481408   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481414   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.481423   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.481426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.484712   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.485108   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.485126   22837 pod_ready.go:82] duration metric: took 400.941479ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.485135   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.681315   22837 request.go:632] Waited for 196.100251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681368   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681374   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.681382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.681387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.684555   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.880797   22837 request.go:632] Waited for 195.427595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880875   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.880886   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.880916   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.884757   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.885225   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.885244   22837 pod_ready.go:82] duration metric: took 400.103235ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.885253   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.080631   22837 request.go:632] Waited for 195.310618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080703   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.080712   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.080718   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.084028   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.281072   22837 request.go:632] Waited for 196.37227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281123   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281128   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.281136   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.281140   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.284485   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.285140   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.285160   22837 pod_ready.go:82] duration metric: took 399.900589ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.285169   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.481228   22837 request.go:632] Waited for 196.007394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481285   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481290   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.481297   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.481301   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.484526   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.680916   22837 request.go:632] Waited for 195.378531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681014   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.681027   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.681033   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.683790   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:33.684472   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.684489   22837 pod_ready.go:82] duration metric: took 399.314616ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.684498   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.880975   22837 request.go:632] Waited for 196.408433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881026   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881031   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.881038   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.881043   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.884212   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.081232   22837 request.go:632] Waited for 196.342139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081301   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081312   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.081340   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.081347   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.084215   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:34.084885   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:34.084905   22837 pod_ready.go:82] duration metric: took 400.399835ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:34.084918   22837 pod_ready.go:39] duration metric: took 3.200860786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:42:34.084956   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:42:34.085018   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:42:34.099253   22837 api_server.go:72] duration metric: took 21.483198905s to wait for apiserver process to appear ...
	I0924 18:42:34.099269   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:42:34.099293   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:42:34.103172   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:42:34.103230   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:42:34.103238   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.103245   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.103249   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.104031   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:42:34.104219   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:42:34.104236   22837 api_server.go:131] duration metric: took 4.961214ms to wait for apiserver health ...
	I0924 18:42:34.104242   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:42:34.280630   22837 request.go:632] Waited for 176.320456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280681   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280686   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.280694   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.280697   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.284696   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.289267   22837 system_pods.go:59] 17 kube-system pods found
	I0924 18:42:34.289298   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.289303   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.289307   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.289312   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.289315   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.289318   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.289322   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.289325   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.289329   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.289333   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.289335   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.289339   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.289341   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.289344   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.289351   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.289355   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.289357   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.289363   22837 system_pods.go:74] duration metric: took 185.114229ms to wait for pod list to return data ...
	I0924 18:42:34.289371   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:42:34.480833   22837 request.go:632] Waited for 191.389799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480905   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480912   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.480920   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.480925   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.484374   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.484575   22837 default_sa.go:45] found service account: "default"
	I0924 18:42:34.484590   22837 default_sa.go:55] duration metric: took 195.213451ms for default service account to be created ...
	I0924 18:42:34.484598   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:42:34.681020   22837 request.go:632] Waited for 196.354693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681092   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681097   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.681105   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.681113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.685266   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:34.689541   22837 system_pods.go:86] 17 kube-system pods found
	I0924 18:42:34.689565   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.689571   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.689574   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.689578   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.689581   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.689585   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.689588   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.689593   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.689598   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.689603   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.689608   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.689616   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.689623   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.689633   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.689638   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.689642   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.689646   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.689652   22837 system_pods.go:126] duration metric: took 205.048658ms to wait for k8s-apps to be running ...
	I0924 18:42:34.689667   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:42:34.689711   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:34.702696   22837 system_svc.go:56] duration metric: took 13.022824ms WaitForService to wait for kubelet
	I0924 18:42:34.702718   22837 kubeadm.go:582] duration metric: took 22.086667119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:42:34.702741   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:42:34.881196   22837 request.go:632] Waited for 178.393564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881289   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881300   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.881308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.881314   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.885104   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.885818   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885841   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885858   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885862   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885866   22837 node_conditions.go:105] duration metric: took 183.120221ms to run NodePressure ...
	I0924 18:42:34.885879   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:42:34.885917   22837 start.go:255] writing updated cluster config ...
	I0924 18:42:34.888071   22837 out.go:201] 
	I0924 18:42:34.889729   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:34.889845   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.891554   22837 out.go:177] * Starting "ha-685475-m03" control-plane node in "ha-685475" cluster
	I0924 18:42:34.893081   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:42:34.893105   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:42:34.893223   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:42:34.893237   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:42:34.893331   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.893543   22837 start.go:360] acquireMachinesLock for ha-685475-m03: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:42:34.893593   22837 start.go:364] duration metric: took 31.193µs to acquireMachinesLock for "ha-685475-m03"
	I0924 18:42:34.893622   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:42:34.893742   22837 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 18:42:34.895364   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:42:34.895477   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:42:34.895520   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:42:34.910309   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0924 18:42:34.910707   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:42:34.911166   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:42:34.911189   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:42:34.911445   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:42:34.911666   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:34.911812   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:34.911970   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:42:34.912006   22837 client.go:168] LocalClient.Create starting
	I0924 18:42:34.912049   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:42:34.912087   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912107   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912168   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:42:34.912193   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912206   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912226   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:42:34.912234   22837 main.go:141] libmachine: (ha-685475-m03) Calling .PreCreateCheck
	I0924 18:42:34.912354   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:34.912664   22837 main.go:141] libmachine: Creating machine...
	I0924 18:42:34.912675   22837 main.go:141] libmachine: (ha-685475-m03) Calling .Create
	I0924 18:42:34.912804   22837 main.go:141] libmachine: (ha-685475-m03) Creating KVM machine...
	I0924 18:42:34.914072   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing default KVM network
	I0924 18:42:34.914216   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing private KVM network mk-ha-685475
	I0924 18:42:34.914343   22837 main.go:141] libmachine: (ha-685475-m03) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:34.914367   22837 main.go:141] libmachine: (ha-685475-m03) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:42:34.914418   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:34.914332   23604 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:34.914495   22837 main.go:141] libmachine: (ha-685475-m03) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:42:35.139279   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.139122   23604 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa...
	I0924 18:42:35.223317   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223211   23604 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk...
	I0924 18:42:35.223345   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing magic tar header
	I0924 18:42:35.223358   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing SSH key tar header
	I0924 18:42:35.223365   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223334   23604 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:35.223430   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03
	I0924 18:42:35.223477   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 (perms=drwx------)
	I0924 18:42:35.223494   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:42:35.223501   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:42:35.223508   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:35.223518   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:42:35.223529   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:42:35.223535   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:42:35.223544   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:42:35.223549   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:35.223557   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:42:35.223562   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:42:35.223568   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:42:35.223575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home
	I0924 18:42:35.223580   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Skipping /home - not owner
	I0924 18:42:35.224656   22837 main.go:141] libmachine: (ha-685475-m03) define libvirt domain using xml: 
	I0924 18:42:35.224680   22837 main.go:141] libmachine: (ha-685475-m03) <domain type='kvm'>
	I0924 18:42:35.224689   22837 main.go:141] libmachine: (ha-685475-m03)   <name>ha-685475-m03</name>
	I0924 18:42:35.224694   22837 main.go:141] libmachine: (ha-685475-m03)   <memory unit='MiB'>2200</memory>
	I0924 18:42:35.224699   22837 main.go:141] libmachine: (ha-685475-m03)   <vcpu>2</vcpu>
	I0924 18:42:35.224704   22837 main.go:141] libmachine: (ha-685475-m03)   <features>
	I0924 18:42:35.224709   22837 main.go:141] libmachine: (ha-685475-m03)     <acpi/>
	I0924 18:42:35.224713   22837 main.go:141] libmachine: (ha-685475-m03)     <apic/>
	I0924 18:42:35.224718   22837 main.go:141] libmachine: (ha-685475-m03)     <pae/>
	I0924 18:42:35.224722   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.224730   22837 main.go:141] libmachine: (ha-685475-m03)   </features>
	I0924 18:42:35.224736   22837 main.go:141] libmachine: (ha-685475-m03)   <cpu mode='host-passthrough'>
	I0924 18:42:35.224742   22837 main.go:141] libmachine: (ha-685475-m03)   
	I0924 18:42:35.224746   22837 main.go:141] libmachine: (ha-685475-m03)   </cpu>
	I0924 18:42:35.224750   22837 main.go:141] libmachine: (ha-685475-m03)   <os>
	I0924 18:42:35.224756   22837 main.go:141] libmachine: (ha-685475-m03)     <type>hvm</type>
	I0924 18:42:35.224761   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='cdrom'/>
	I0924 18:42:35.224770   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='hd'/>
	I0924 18:42:35.224784   22837 main.go:141] libmachine: (ha-685475-m03)     <bootmenu enable='no'/>
	I0924 18:42:35.224794   22837 main.go:141] libmachine: (ha-685475-m03)   </os>
	I0924 18:42:35.224799   22837 main.go:141] libmachine: (ha-685475-m03)   <devices>
	I0924 18:42:35.224808   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='cdrom'>
	I0924 18:42:35.224840   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/boot2docker.iso'/>
	I0924 18:42:35.224861   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hdc' bus='scsi'/>
	I0924 18:42:35.224871   22837 main.go:141] libmachine: (ha-685475-m03)       <readonly/>
	I0924 18:42:35.224885   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224898   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='disk'>
	I0924 18:42:35.224908   22837 main.go:141] libmachine: (ha-685475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:42:35.224920   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk'/>
	I0924 18:42:35.224939   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hda' bus='virtio'/>
	I0924 18:42:35.224949   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224954   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225004   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='mk-ha-685475'/>
	I0924 18:42:35.225029   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225048   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225067   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225079   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='default'/>
	I0924 18:42:35.225088   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225094   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225101   22837 main.go:141] libmachine: (ha-685475-m03)     <serial type='pty'>
	I0924 18:42:35.225106   22837 main.go:141] libmachine: (ha-685475-m03)       <target port='0'/>
	I0924 18:42:35.225112   22837 main.go:141] libmachine: (ha-685475-m03)     </serial>
	I0924 18:42:35.225118   22837 main.go:141] libmachine: (ha-685475-m03)     <console type='pty'>
	I0924 18:42:35.225124   22837 main.go:141] libmachine: (ha-685475-m03)       <target type='serial' port='0'/>
	I0924 18:42:35.225131   22837 main.go:141] libmachine: (ha-685475-m03)     </console>
	I0924 18:42:35.225144   22837 main.go:141] libmachine: (ha-685475-m03)     <rng model='virtio'>
	I0924 18:42:35.225156   22837 main.go:141] libmachine: (ha-685475-m03)       <backend model='random'>/dev/random</backend>
	I0924 18:42:35.225167   22837 main.go:141] libmachine: (ha-685475-m03)     </rng>
	I0924 18:42:35.225176   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225183   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225192   22837 main.go:141] libmachine: (ha-685475-m03)   </devices>
	I0924 18:42:35.225202   22837 main.go:141] libmachine: (ha-685475-m03) </domain>
	I0924 18:42:35.225210   22837 main.go:141] libmachine: (ha-685475-m03) 
	I0924 18:42:35.232041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:d0:37:5a in network default
	I0924 18:42:35.232661   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:35.232681   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring networks are active...
	I0924 18:42:35.233409   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network default is active
	I0924 18:42:35.233744   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network mk-ha-685475 is active
	I0924 18:42:35.234266   22837 main.go:141] libmachine: (ha-685475-m03) Getting domain xml...
	I0924 18:42:35.235093   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:36.442620   22837 main.go:141] libmachine: (ha-685475-m03) Waiting to get IP...
	I0924 18:42:36.443397   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.443765   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.443802   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.443732   23604 retry.go:31] will retry after 244.798943ms: waiting for machine to come up
	I0924 18:42:36.690206   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.690698   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.690720   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.690654   23604 retry.go:31] will retry after 308.672235ms: waiting for machine to come up
	I0924 18:42:37.000890   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.001339   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.001369   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.001302   23604 retry.go:31] will retry after 346.180057ms: waiting for machine to come up
	I0924 18:42:37.348700   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.349107   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.349134   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.349075   23604 retry.go:31] will retry after 530.317337ms: waiting for machine to come up
	I0924 18:42:37.881459   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.882098   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.882122   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.882050   23604 retry.go:31] will retry after 620.764429ms: waiting for machine to come up
	I0924 18:42:38.504892   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:38.505327   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:38.505356   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:38.505288   23604 retry.go:31] will retry after 656.642966ms: waiting for machine to come up
	I0924 18:42:39.163234   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.163670   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.163696   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.163622   23604 retry.go:31] will retry after 804.533823ms: waiting for machine to come up
	I0924 18:42:39.969249   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.969758   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.969781   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.969719   23604 retry.go:31] will retry after 1.112599979s: waiting for machine to come up
	I0924 18:42:41.083861   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:41.084304   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:41.084326   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:41.084250   23604 retry.go:31] will retry after 1.484881709s: waiting for machine to come up
	I0924 18:42:42.570773   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:42.571260   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:42.571291   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:42.571214   23604 retry.go:31] will retry after 1.470650116s: waiting for machine to come up
	I0924 18:42:44.043746   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:44.044161   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:44.044186   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:44.044127   23604 retry.go:31] will retry after 2.749899674s: waiting for machine to come up
	I0924 18:42:46.796154   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:46.796548   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:46.796586   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:46.796499   23604 retry.go:31] will retry after 2.668083753s: waiting for machine to come up
	I0924 18:42:49.467725   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:49.468171   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:49.468196   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:49.468125   23604 retry.go:31] will retry after 4.505913039s: waiting for machine to come up
	I0924 18:42:53.976055   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:53.976513   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:53.976533   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:53.976473   23604 retry.go:31] will retry after 5.05928848s: waiting for machine to come up
	I0924 18:42:59.039895   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.040268   22837 main.go:141] libmachine: (ha-685475-m03) Found IP for machine: 192.168.39.84
	I0924 18:42:59.040292   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.040302   22837 main.go:141] libmachine: (ha-685475-m03) Reserving static IP address...
	I0924 18:42:59.040633   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find host DHCP lease matching {name: "ha-685475-m03", mac: "52:54:00:47:f3:5c", ip: "192.168.39.84"} in network mk-ha-685475
	I0924 18:42:59.109971   22837 main.go:141] libmachine: (ha-685475-m03) Reserved static IP address: 192.168.39.84
	I0924 18:42:59.110001   22837 main.go:141] libmachine: (ha-685475-m03) Waiting for SSH to be available...
	I0924 18:42:59.110011   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Getting to WaitForSSH function...
	I0924 18:42:59.112837   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.113218   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.113243   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.113377   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH client type: external
	I0924 18:42:59.113400   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa (-rw-------)
	I0924 18:42:59.113429   22837 main.go:141] libmachine: (ha-685475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:42:59.113441   22837 main.go:141] libmachine: (ha-685475-m03) DBG | About to run SSH command:
	I0924 18:42:59.113458   22837 main.go:141] libmachine: (ha-685475-m03) DBG | exit 0
	I0924 18:42:59.234787   22837 main.go:141] libmachine: (ha-685475-m03) DBG | SSH cmd err, output: <nil>: 
	I0924 18:42:59.235096   22837 main.go:141] libmachine: (ha-685475-m03) KVM machine creation complete!
	I0924 18:42:59.235444   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:59.235990   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236156   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236834   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:42:59.236851   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetState
	I0924 18:42:59.238058   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:42:59.238082   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:42:59.238089   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:42:59.238099   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.241168   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241742   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.241769   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241929   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.242092   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242231   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242340   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.242506   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.242695   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.242706   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:42:59.337829   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:42:59.337850   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:42:59.337860   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.340431   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340774   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.340806   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340930   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.341115   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341253   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341386   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.341535   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.341719   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.341733   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:42:59.439659   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:42:59.439743   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:42:59.439756   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:42:59.439767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440013   22837 buildroot.go:166] provisioning hostname "ha-685475-m03"
	I0924 18:42:59.440035   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440208   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.443110   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.443484   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443628   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.443776   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.443925   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.444043   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.444195   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.444388   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.444405   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m03 && echo "ha-685475-m03" | sudo tee /etc/hostname
	I0924 18:42:59.552104   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m03
	
	I0924 18:42:59.552146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.555198   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555610   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.555635   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555825   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.555999   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556210   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556377   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.556530   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.556692   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.556725   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:42:59.663026   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:42:59.663065   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:42:59.663091   22837 buildroot.go:174] setting up certificates
	I0924 18:42:59.663104   22837 provision.go:84] configureAuth start
	I0924 18:42:59.663128   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.663405   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:42:59.666046   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666433   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.666453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666616   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.668726   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669069   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.669093   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669219   22837 provision.go:143] copyHostCerts
	I0924 18:42:59.669250   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669289   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:42:59.669299   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669379   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:42:59.669484   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669511   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:42:59.669521   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669559   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:42:59.669627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669655   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:42:59.669664   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669698   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:42:59.669771   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m03 san=[127.0.0.1 192.168.39.84 ha-685475-m03 localhost minikube]
	I0924 18:43:00.034638   22837 provision.go:177] copyRemoteCerts
	I0924 18:43:00.034686   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:43:00.034707   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.037567   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.037972   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.037994   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.038177   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.038367   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.038523   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.038654   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.116658   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:43:00.116731   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:43:00.138751   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:43:00.138812   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:43:00.160322   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:43:00.160404   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:43:00.182956   22837 provision.go:87] duration metric: took 519.836065ms to configureAuth
	I0924 18:43:00.182981   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:43:00.183174   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:00.183247   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.186012   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186463   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.186490   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186708   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.186905   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187085   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187211   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.187369   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.187586   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.187604   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:43:00.387241   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:43:00.387266   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:43:00.387274   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetURL
	I0924 18:43:00.388619   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using libvirt version 6000000
	I0924 18:43:00.390883   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391239   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.391267   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391387   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:43:00.391407   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:43:00.391414   22837 client.go:171] duration metric: took 25.479397424s to LocalClient.Create
	I0924 18:43:00.391440   22837 start.go:167] duration metric: took 25.479470372s to libmachine.API.Create "ha-685475"
	I0924 18:43:00.391451   22837 start.go:293] postStartSetup for "ha-685475-m03" (driver="kvm2")
	I0924 18:43:00.391474   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:43:00.391492   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.391777   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:43:00.391810   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.393710   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394015   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.394041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394165   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.394339   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.394452   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.394556   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.473009   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:43:00.477004   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:43:00.477028   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:43:00.477094   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:43:00.477170   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:43:00.477183   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:43:00.477284   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:43:00.486009   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:00.508200   22837 start.go:296] duration metric: took 116.732729ms for postStartSetup
	I0924 18:43:00.508250   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:43:00.508816   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.511555   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.511901   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.511930   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.512205   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:43:00.512420   22837 start.go:128] duration metric: took 25.618667241s to createHost
	I0924 18:43:00.512456   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.514675   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.515063   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515191   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.515334   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515443   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515542   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.515680   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.515847   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.515859   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:43:00.611172   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203380.591704428
	
	I0924 18:43:00.611192   22837 fix.go:216] guest clock: 1727203380.591704428
	I0924 18:43:00.611199   22837 fix.go:229] Guest: 2024-09-24 18:43:00.591704428 +0000 UTC Remote: 2024-09-24 18:43:00.512437538 +0000 UTC m=+144.926822798 (delta=79.26689ms)
	I0924 18:43:00.611227   22837 fix.go:200] guest clock delta is within tolerance: 79.26689ms
	I0924 18:43:00.611257   22837 start.go:83] releasing machines lock for "ha-685475-m03", held for 25.717628791s
	I0924 18:43:00.611280   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.611536   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.614210   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.614585   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.614613   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.617023   22837 out.go:177] * Found network options:
	I0924 18:43:00.618386   22837 out.go:177]   - NO_PROXY=192.168.39.7,192.168.39.17
	W0924 18:43:00.619538   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.619561   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.619572   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620209   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:43:00.620244   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	W0924 18:43:00.620303   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.620325   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.620388   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:43:00.620402   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.622880   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623148   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623312   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623338   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623544   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623554   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623757   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623887   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623954   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624095   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.624139   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.854971   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:43:00.860491   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:43:00.860570   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:43:00.875041   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:43:00.875064   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:43:00.875138   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:43:00.890952   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:43:00.903982   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:43:00.904031   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:43:00.917362   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:43:00.932669   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:43:01.042282   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:43:01.188592   22837 docker.go:233] disabling docker service ...
	I0924 18:43:01.188652   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:43:01.202602   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:43:01.214596   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:43:01.362941   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:43:01.483096   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:43:01.496147   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:43:01.513707   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:43:01.513773   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.523612   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:43:01.523679   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.534669   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.544789   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.554357   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:43:01.564046   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.573589   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.589268   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
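The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch cri-o to the cgroupfs driver. A minimal local sketch of the same two edits; the helper name is hypothetical, and running it for real needs root plus an existing cri-o install:

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO mirrors the sed edits shown in the log: set the pause image and the
// cgroup manager in /etc/crio/crio.conf.d/02-crio.conf.
func configureCRIO(pauseImage, cgroupManager string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Println("configure cri-o:", err)
	}
}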
	I0924 18:43:01.599288   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:43:01.609178   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:43:01.609244   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:43:01.620961   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
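When the bridge-nf-call-iptables sysctl is missing, the log falls back to loading br_netfilter and then enables IPv4 forwarding. A small sketch of that fallback, assuming root on a Linux guest; the function name is invented:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter follows the fallback above: if the bridge-nf-call-iptables
// sysctl cannot be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("load br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}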
	I0924 18:43:01.629927   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:01.745962   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:43:01.839298   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:43:01.839385   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:43:01.843960   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:43:01.844013   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:43:01.847394   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:43:01.883086   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:43:01.883173   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:43:01.910912   22837 ssh_runner.go:195] Run: crio --version
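The runtime checks above just shell out to crictl and crio and read their version output. A minimal sketch of the same probe; it assumes both binaries are installed at the paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// probeRuntime mirrors the checks above: require crictl to answer, then report the
// cri-o version string.
func probeRuntime() (string, error) {
	if err := exec.Command("sudo", "/usr/bin/crictl", "version").Run(); err != nil {
		return "", fmt.Errorf("crictl not responding: %w", err)
	}
	out, err := exec.Command("crio", "--version").Output()
	return string(out), err
}

func main() {
	v, err := probeRuntime()
	fmt.Println(v, err)
}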
	I0924 18:43:01.939648   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:43:01.941115   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:43:01.942322   22837 out.go:177]   - env NO_PROXY=192.168.39.7,192.168.39.17
	I0924 18:43:01.943445   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:01.945818   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946123   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:01.946145   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946354   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:43:01.950271   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:01.961605   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:43:01.961842   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:01.962136   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.962173   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.976744   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0924 18:43:01.977209   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.977706   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.977723   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.978053   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.978214   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:43:01.979876   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:01.980161   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.980194   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.994159   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0924 18:43:01.994450   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.994902   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.994924   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.995194   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.995386   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:01.995533   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.84
	I0924 18:43:01.995545   22837 certs.go:194] generating shared ca certs ...
	I0924 18:43:01.995558   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:01.995697   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:43:01.995733   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:43:01.995744   22837 certs.go:256] generating profile certs ...
	I0924 18:43:01.995811   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:43:01.995834   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721
	I0924 18:43:01.995847   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.84 192.168.39.254]
	I0924 18:43:02.322791   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 ...
	I0924 18:43:02.322837   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721: {Name:mkebefefa2737490c508c384151059616130ea10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323013   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 ...
	I0924 18:43:02.323026   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721: {Name:mk784db272b18b5ad01513b873f3e2d227a52a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323095   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:43:02.323227   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
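The apiserver certificate generated above carries the service IP, localhost, every control-plane IP, and the VIP as SANs. A self-contained sketch of issuing a certificate with that SAN list via crypto/x509; it self-signs for brevity, whereas minikube signs with its cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the crypto.go line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.7"), net.ParseIP("192.168.39.17"),
			net.ParseIP("192.168.39.84"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}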
	I0924 18:43:02.323344   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:43:02.323364   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:43:02.323377   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:43:02.323390   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:43:02.323403   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:43:02.323415   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:43:02.323427   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:43:02.323438   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:43:02.338931   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:43:02.339017   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:43:02.339066   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:43:02.339077   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:43:02.339099   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:43:02.339124   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:43:02.339155   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:43:02.339192   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:02.339227   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.339248   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.339262   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.339300   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:02.342163   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342483   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:02.342502   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342764   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:02.342966   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:02.343115   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:02.343267   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:02.415201   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:43:02.420165   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:43:02.429856   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:43:02.433796   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:43:02.444492   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:43:02.448439   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:43:02.457436   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:43:02.461533   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:43:02.470598   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:43:02.474412   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:43:02.483836   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:43:02.487823   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:43:02.497111   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:43:02.521054   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:43:02.543456   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:43:02.568215   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:43:02.592612   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 18:43:02.615696   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:43:02.644606   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:43:02.666219   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:43:02.687592   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:43:02.709023   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:43:02.730055   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:43:02.751785   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:43:02.766876   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:43:02.781877   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:43:02.801467   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:43:02.818674   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:43:02.833922   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:43:02.850197   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 18:43:02.867351   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:43:02.872885   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:43:02.883212   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887607   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887666   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.893210   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:43:02.903216   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:43:02.913130   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917524   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917603   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.922951   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:43:02.932615   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:43:02.942684   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946739   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946793   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.952018   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
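Each CA file above is linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0) so OpenSSL can find it. A sketch of that hash-and-symlink step; it shells out to openssl and needs write access to the target directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the openssl subject hash of a certificate and symlinks it into
// certsDir as <hash>.0, mirroring the openssl/ln commands in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link cert:", err)
	}
}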
	I0924 18:43:02.962341   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:43:02.965981   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:43:02.966043   22837 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I0924 18:43:02.966160   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:43:02.966192   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:43:02.966222   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:43:02.981139   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:43:02.981202   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
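minikube renders the static pod manifest above from a template before writing it to /etc/kubernetes/manifests. An abbreviated text/template sketch with the same shape; the template literal here is a stand-in carrying only the address and port env vars, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// An abbreviated stand-in for the generated manifest above; the real template carries
// the full env list (leader election, load balancing, and so on).
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	t.Execute(os.Stdout, struct {
		Image, VIP, Port string
	}{"ghcr.io/kube-vip/kube-vip:v0.8.0", "192.168.39.254", "8443"})
}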
	I0924 18:43:02.981266   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.990568   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:43:02.990634   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.999175   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:43:02.999208   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999266   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999178   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 18:43:02.999349   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:02.999180   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 18:43:02.999391   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:02.999394   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:03.003117   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:43:03.003143   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:43:03.036084   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.036114   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:43:03.036142   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:43:03.036201   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.075645   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:43:03.075686   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
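Each binary transfer above follows the same pattern: stat the target path, and only copy the cached file when the stat fails. A local-filesystem sketch of that install-if-missing pattern (the real flow runs the stat and the copy over SSH):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// installIfMissing stats the target path and only copies the cached binary over when
// the stat fails, mirroring the existence checks in the log.
func installIfMissing(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	err := installIfMissing(os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubelet"),
		"/var/lib/minikube/binaries/v1.31.1/kubelet")
	fmt.Println("install kubelet:", err)
}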
	I0924 18:43:03.823364   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:43:03.832908   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:43:03.848931   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:43:03.864946   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:43:03.881201   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:43:03.885272   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:03.896591   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:04.021336   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:04.039285   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:04.039604   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:04.039646   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:04.055236   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0924 18:43:04.055694   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:04.056178   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:04.056193   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:04.056537   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:04.056733   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:04.056878   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0924 18:43:04.057018   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:43:04.057041   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:04.059760   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060326   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:04.060356   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060505   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:04.060673   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:04.060817   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:04.060972   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:04.197827   22837 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:04.197878   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0924 18:43:25.103587   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (20.905680905s)
	I0924 18:43:25.103634   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:43:25.704348   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m03 minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:43:25.818601   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:43:25.943482   22837 start.go:319] duration metric: took 21.886600064s to joinCluster
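The join step assembles a single kubeadm command with the token, CA cert hash, CRI socket, node name, and advertise address, then runs it over SSH on the new node. A sketch of building that command string; the token and hash here are placeholders, not the values from this run:

package main

import (
	"fmt"
	"strings"
)

// joinCommand assembles a control-plane join command of the shape shown in the log.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	return strings.Join([]string{
		"kubeadm join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}, " ")
}

func main() {
	fmt.Println(joinCommand("control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", "ha-685475-m03", "192.168.39.84", 8443))
}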
	I0924 18:43:25.943562   22837 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:25.943868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:25.945143   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:43:25.946900   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:26.202957   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:26.232194   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:43:26.232534   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:43:26.232613   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:43:26.232964   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m03" to be "Ready" ...
	I0924 18:43:26.233091   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.233102   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.233113   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.233119   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.236798   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
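The repeated GET requests above and below are the readiness poll: fetch the node object roughly every 500ms and stop once the NodeReady condition reports True. A client-go sketch of the same loop; the kubeconfig path and the 6-minute timeout mirror the log, and the helper name is invented:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its NodeReady condition is True or the
// timeout expires, mirroring the GET loop in the log.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("node %s not Ready within %v", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForNodeReady(cs, "ha-685475-m03", 6*time.Minute))
}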
	I0924 18:43:26.733233   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.733268   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.733273   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.736350   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:27.234119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.234154   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.234165   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.234175   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.240637   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:27.733351   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.733376   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.733387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.742949   22837 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0924 18:43:28.233173   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.233194   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.233202   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.233206   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.236224   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:28.237052   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:28.733360   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.733382   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.733399   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.736288   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:29.233877   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.233916   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.233928   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.233933   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.239798   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:29.733882   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.733906   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.733918   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.733925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.738420   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:30.233669   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.233691   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.233699   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.233702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.237023   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:30.237689   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:30.733690   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.733716   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.733726   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.733733   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.736562   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:31.233177   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.233204   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.233216   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.233221   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.237262   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:31.733331   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.733356   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.733368   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.733375   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.736291   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:32.234100   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.234122   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.234130   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.234134   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.237699   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:32.238691   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:32.734110   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.734139   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.734148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.734156   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.737099   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:33.233554   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.233581   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.233597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.233602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.236923   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:33.733151   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.733173   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.733181   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.733186   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.736346   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.234015   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.234035   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.234045   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.234049   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.237241   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.734163   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.734184   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.734193   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.734196   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.737761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.738342   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:35.234001   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.234024   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.234032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.234036   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.237606   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:35.733696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.733720   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.733730   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.733735   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.744612   22837 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0924 18:43:36.233198   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.233218   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.233226   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.233230   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.236903   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:36.734073   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.734097   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.734107   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.734113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.737583   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.234135   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.234158   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.234166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.234170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.237414   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.238235   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:37.733447   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.733464   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.733472   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.733477   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.737157   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.233502   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.233528   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.233541   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.233550   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.236943   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.734024   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.734049   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.734061   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.734068   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.737560   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:39.233277   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.233313   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.238242   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:39.238885   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:39.733235   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.733265   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.733269   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.736692   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.233260   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.233287   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.233300   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.233308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.236543   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.733171   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.733195   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.733205   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.733212   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.740055   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:41.233389   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.233414   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.233422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.233428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.238076   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.733867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.733888   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.733896   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.733902   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.738641   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.739398   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:42.233262   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.233290   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.233314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.236491   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:42.733416   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.733438   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.733445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.733450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.736799   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.233279   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.233308   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.233312   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.238341   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.238906   22837 node_ready.go:49] node "ha-685475-m03" has status "Ready":"True"
	I0924 18:43:43.238924   22837 node_ready.go:38] duration metric: took 17.005939201s for node "ha-685475-m03" to be "Ready" ...
	I0924 18:43:43.238932   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:43.239003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:43.239014   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.239021   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.239028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.244370   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.251285   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.251369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:43:43.251380   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.251391   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.251397   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.254058   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.254668   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.254684   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.254696   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.254705   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.256747   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.257336   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.257356   22837 pod_ready.go:82] duration metric: took 6.045735ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257366   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257424   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:43:43.257436   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.257446   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.257453   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.259853   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.260510   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.260535   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.260545   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.260560   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.262661   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.263075   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.263089   22837 pod_ready.go:82] duration metric: took 5.713062ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263099   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263153   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:43:43.263164   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.263173   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.263181   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.265421   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.266025   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.266041   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.266051   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.266056   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.268154   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.268655   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.268677   22837 pod_ready.go:82] duration metric: took 5.571952ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268686   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268729   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:43:43.268736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.268743   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.268748   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.270920   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.271534   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:43.271559   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.271569   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.271575   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.273706   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.274155   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.274174   22837 pod_ready.go:82] duration metric: took 5.482358ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.274182   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.433530   22837 request.go:632] Waited for 159.301092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433597   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433607   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.433614   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.433620   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.436812   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.633686   22837 request.go:632] Waited for 196.323402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633768   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633775   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.633786   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.633789   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.636913   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.637664   22837 pod_ready.go:93] pod "etcd-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.637687   22837 pod_ready.go:82] duration metric: took 363.498352ms for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.637711   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.833926   22837 request.go:632] Waited for 196.128909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.833999   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.834017   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.834032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.834048   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.837007   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:44.033945   22837 request.go:632] Waited for 196.25ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.033995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.034000   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.034007   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.034013   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.037183   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.037998   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.038015   22837 pod_ready.go:82] duration metric: took 400.293259ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.038024   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.233670   22837 request.go:632] Waited for 195.573608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233746   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233751   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.233759   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.233770   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.236800   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.434104   22837 request.go:632] Waited for 196.353101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434150   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434155   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.434162   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.434166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.437459   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.438061   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.438077   22837 pod_ready.go:82] duration metric: took 400.046958ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.438087   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.634247   22837 request.go:632] Waited for 196.068994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634307   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634314   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.634323   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.634333   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.637761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.834009   22837 request.go:632] Waited for 195.341273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834067   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.834075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.834079   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.837377   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.838102   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.838124   22837 pod_ready.go:82] duration metric: took 400.029506ms for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.838137   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.033524   22837 request.go:632] Waited for 195.317742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033577   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033583   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.033597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.033602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.038542   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.233396   22837 request.go:632] Waited for 194.275856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233476   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233483   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.233494   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.233499   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.237836   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.238292   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.238309   22837 pod_ready.go:82] duration metric: took 400.16501ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.238319   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.434068   22837 request.go:632] Waited for 195.691023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434126   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434131   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.434138   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.434142   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.437774   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.634002   22837 request.go:632] Waited for 195.223479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634063   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634070   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.634080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.634086   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.637445   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.638048   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.638072   22837 pod_ready.go:82] duration metric: took 399.746216ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.638086   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.833552   22837 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833626   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.833637   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.833645   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.837253   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.033410   22837 request.go:632] Waited for 195.28753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033466   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033471   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.033479   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.033484   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.036819   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.037577   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.037601   22837 pod_ready.go:82] duration metric: took 399.507145ms for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.037614   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.233664   22837 request.go:632] Waited for 195.987183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233730   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.233744   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.233751   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.236704   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.433753   22837 request.go:632] Waited for 196.36056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433836   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433849   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.433858   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.433864   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.436885   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.437346   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.437362   22837 pod_ready.go:82] duration metric: took 399.741929ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.437371   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.633383   22837 request.go:632] Waited for 195.935746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633452   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.633467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.633472   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.636654   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.833848   22837 request.go:632] Waited for 196.369969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833916   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833926   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.833936   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.833944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.836871   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.837369   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.837390   22837 pod_ready.go:82] duration metric: took 400.012248ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.837402   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.033325   22837 request.go:632] Waited for 195.841602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033432   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033444   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.033452   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.033455   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.037080   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.234175   22837 request.go:632] Waited for 196.377747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234251   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234257   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.234266   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.234278   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.238255   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.238898   22837 pod_ready.go:93] pod "kube-proxy-mzlfj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.238919   22837 pod_ready.go:82] duration metric: took 401.508549ms for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.238933   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.433952   22837 request.go:632] Waited for 194.91975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434033   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434044   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.434055   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.434064   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.437332   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.633347   22837 request.go:632] Waited for 195.287392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633423   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633433   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.633441   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.633445   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.636933   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.637777   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.637815   22837 pod_ready.go:82] duration metric: took 398.871168ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.637829   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.834176   22837 request.go:632] Waited for 196.271361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834232   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834238   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.834246   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.834250   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.836928   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:48.033993   22837 request.go:632] Waited for 196.330346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034058   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034064   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.034074   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.034084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.037490   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.038369   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.038391   22837 pod_ready.go:82] duration metric: took 400.547551ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.038404   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.233397   22837 request.go:632] Waited for 194.929707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233454   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.233467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.233471   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.236987   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.433994   22837 request.go:632] Waited for 196.397643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434055   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434062   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.434073   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.434081   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.437996   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.438514   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.438617   22837 pod_ready.go:82] duration metric: took 400.123712ms for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.438680   22837 pod_ready.go:39] duration metric: took 5.199733297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:48.438705   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:43:48.438774   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:43:48.452044   22837 api_server.go:72] duration metric: took 22.508447307s to wait for apiserver process to appear ...
	I0924 18:43:48.452066   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:43:48.452082   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:43:48.457867   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:43:48.457929   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:43:48.457937   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.457945   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.457950   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.458795   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:43:48.458877   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:43:48.458893   22837 api_server.go:131] duration metric: took 6.820487ms to wait for apiserver health ...
	I0924 18:43:48.458900   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:43:48.634297   22837 request.go:632] Waited for 175.332984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634358   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.634381   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.634385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.640434   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:48.648701   22837 system_pods.go:59] 24 kube-system pods found
	I0924 18:43:48.648727   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:48.648734   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:48.648739   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:48.648744   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:48.648749   22837 system_pods.go:61] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:48.648753   22837 system_pods.go:61] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:48.648758   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:48.648764   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:48.648769   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:48.648778   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:48.648786   22837 system_pods.go:61] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:48.648794   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:48.648799   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:48.648804   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:48.648810   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:48.648818   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:48.648824   22837 system_pods.go:61] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:48.648829   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:48.648835   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:48.648848   22837 system_pods.go:61] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:48.648855   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:48.648860   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:48.648867   22837 system_pods.go:61] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:48.648873   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:48.648881   22837 system_pods.go:74] duration metric: took 189.974541ms to wait for pod list to return data ...
	I0924 18:43:48.648894   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:43:48.834315   22837 request.go:632] Waited for 185.353374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.834382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.834385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.838136   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.838236   22837 default_sa.go:45] found service account: "default"
	I0924 18:43:48.838249   22837 default_sa.go:55] duration metric: took 189.347233ms for default service account to be created ...
	I0924 18:43:48.838257   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:43:49.033856   22837 request.go:632] Waited for 195.536486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033925   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033930   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.033939   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.033944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.040875   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:49.047492   22837 system_pods.go:86] 24 kube-system pods found
	I0924 18:43:49.047517   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:49.047522   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:49.047526   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:49.047531   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:49.047535   22837 system_pods.go:89] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:49.047538   22837 system_pods.go:89] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:49.047541   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:49.047544   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:49.047549   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:49.047553   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:49.047556   22837 system_pods.go:89] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:49.047560   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:49.047563   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:49.047567   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:49.047570   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:49.047574   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:49.047577   22837 system_pods.go:89] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:49.047580   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:49.047583   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:49.047586   22837 system_pods.go:89] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:49.047589   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:49.047591   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:49.047594   22837 system_pods.go:89] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:49.047597   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:49.047603   22837 system_pods.go:126] duration metric: took 209.341697ms to wait for k8s-apps to be running ...
	I0924 18:43:49.047611   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:43:49.047657   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:49.065856   22837 system_svc.go:56] duration metric: took 18.234674ms WaitForService to wait for kubelet
	I0924 18:43:49.065885   22837 kubeadm.go:582] duration metric: took 23.12228905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:43:49.065905   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:43:49.234361   22837 request.go:632] Waited for 168.355831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234415   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.234422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.234427   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.238548   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:49.242121   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242144   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242160   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242164   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242167   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242170   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242174   22837 node_conditions.go:105] duration metric: took 176.264509ms to run NodePressure ...
	I0924 18:43:49.242184   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:43:49.242210   22837 start.go:255] writing updated cluster config ...
	I0924 18:43:49.242507   22837 ssh_runner.go:195] Run: rm -f paused
	I0924 18:43:49.294738   22837 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:43:49.297711   22837 out.go:177] * Done! kubectl is now configured to use "ha-685475" cluster and "default" namespace by default
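The log segment above is minikube's readiness wait: it polls GET /api/v1/nodes/ha-685475-m03 roughly every 500ms until the node reports "Ready":"True", then checks each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and finally the apiserver /healthz endpoint. As a minimal illustration only (not minikube's actual implementation), the sketch below polls a node's Ready condition with client-go; the kubeconfig path is an assumption, and the node name is simply copied from the log.

// nodeready_sketch.go: minimal stand-in for the node-readiness poll seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location ($HOME/.kube/config);
	// minikube builds its REST config differently, so treat this as a stand-in.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "ha-685475-m03" // node name taken from the log above
	deadline := time.Now().Add(6 * time.Minute)

	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Printf("node %q is Ready\n", nodeName)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval visible in the timestamps
	}
	fmt.Printf("timed out waiting for node %q to become Ready\n", nodeName)
}

For context on the "Waited for ... due to client-side throttling, not priority and fairness" lines interspersed above: they come from client-go's default client-side rate limiter (QPS 5, burst 10 when left unset), not from API-server priority and fairness.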
	
	
	==> CRI-O <==
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.511239716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5aaff394-20eb-4f0e-a2ec-e7cd4791f947 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.511953748Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5e67c68f-8a31-4b58-beb6-ed4e4775a1a1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.512186148Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hmkfk,Uid:8d4c0c92-3c76-478a-b298-c9a7ab9e3995,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203430465944293,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T18:43:50.152786202Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727203297395275336,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-24T18:41:37.070772276Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fchhl,Uid:dc58fefc-6210-4b70-bd0d-dbf5b093e09a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203297394099609,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T18:41:37.071451982Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jf7wr,Uid:a616493e-082e-4ae6-8e12-8c4a2b37a985,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727203297371515704,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T18:41:37.062449713Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&PodSandboxMetadata{Name:kube-proxy-b8x2w,Uid:95e65f4e-7461-479a-8743-ce4f891abfcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203285316892251,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-24T18:41:23.499383915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&PodSandboxMetadata{Name:kindnet-ms6qb,Uid:60485f55-3830-4897-b38e-55779662b999,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203285287384601,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T18:41:23.478653231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-685475,Uid:76c9ba6147bd78ec5c916c82e075c53f,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203272838837962,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76c9ba6147bd78ec5c916c82e075c53f,kubernetes.io/config.seen: 2024-09-24T18:41:12.357260248Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-685475,Uid:590516ed80b227ea320a474c3a9ebfaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203272830634992,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfa
f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 590516ed80b227ea320a474c3a9ebfaf,kubernetes.io/config.seen: 2024-09-24T18:41:12.357261147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-685475,Uid:8b1b3e358bc7b86c05e843e83024d248,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203272820377964,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{kubernetes.io/config.hash: 8b1b3e358bc7b86c05e843e83024d248,kubernetes.io/config.seen: 2024-09-24T18:41:12.357262119Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&PodSandboxMetadata{Name:etcd-ha-685475,Uid:c4f76c4b
882e3909126cd21d4982493e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203272809725695,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.7:2379,kubernetes.io/config.hash: c4f76c4b882e3909126cd21d4982493e,kubernetes.io/config.seen: 2024-09-24T18:41:12.357255124Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-685475,Uid:27e7d23a9b6fbfe2d9aa17cf12d65a47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727203272807227463,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.7:8443,kubernetes.io/config.hash: 27e7d23a9b6fbfe2d9aa17cf12d65a47,kubernetes.io/config.seen: 2024-09-24T18:41:12.357259102Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5e67c68f-8a31-4b58-beb6-ed4e4775a1a1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.512788078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c5d20c6-f68c-4ec2-9007-573c627c2852 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.512893076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c5d20c6-f68c-4ec2-9007-573c627c2852 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.513112201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c5d20c6-f68c-4ec2-9007-573c627c2852 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.514025889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c61a5e39-9261-4eb0-a304-f311d782a405 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.514419932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203647514402945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c61a5e39-9261-4eb0-a304-f311d782a405 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.515020255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5e1753a-ce73-49fe-949f-13d98b68c908 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.515098746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5e1753a-ce73-49fe-949f-13d98b68c908 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.515305207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5e1753a-ce73-49fe-949f-13d98b68c908 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.549946924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=799cca2a-0118-4730-bf8c-bf5c52d38f16 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.550032843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=799cca2a-0118-4730-bf8c-bf5c52d38f16 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.551170425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=810fe3ba-9764-4f4b-9751-dba632539844 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.551605145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203647551583931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=810fe3ba-9764-4f4b-9751-dba632539844 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.552167674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b0c8128-22ba-473b-8bb0-9affe9bd4034 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.552231018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b0c8128-22ba-473b-8bb0-9affe9bd4034 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.552448360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b0c8128-22ba-473b-8bb0-9affe9bd4034 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.596267348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=975c8bf1-7046-4898-a3e2-a59a808cb892 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.596350520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=975c8bf1-7046-4898-a3e2-a59a808cb892 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.597529390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b88fecdd-308c-4be9-b4ac-0582826f6749 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.597978904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203647597956960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b88fecdd-308c-4be9-b4ac-0582826f6749 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.598430733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=addea1db-fc84-4a51-9c85-896ddaac03f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.598526695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=addea1db-fc84-4a51-9c85-896ddaac03f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:27 ha-685475 crio[662]: time="2024-09-24 18:47:27.598749932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=addea1db-fc84-4a51-9c85-896ddaac03f8 name=/runtime.v1.RuntimeService/ListContainers
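
The crio debug entries above are responses to standard CRI RPCs (ListPodSandbox, ListContainers, ImageFsInfo, Version) served to the runtime's CRI clients, primarily the kubelet. Assuming crictl is available on the node (it normally is in the minikube VM image), the same listings can be reproduced interactively from a shell on the node, e.g. after running "minikube ssh -p ha-685475" with the profile name taken from the log above; a minimal sketch:

    sudo crictl pods          # ListPodSandbox
    sudo crictl ps -a         # ListContainers
    sudo crictl imagefsinfo   # ImageService/ImageFsInfo
    sudo crictl version       # RuntimeService/Version
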
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b86d48937d84       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2517ecd8d61cd       busybox-7dff88458-hmkfk
	2c7b4241a9158       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c2c9f0a12f919       coredns-7c65d6cfc9-jf7wr
	7101ffaf02677       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5cb07ffbc15c1       storage-provisioner
	75aac96a2239b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   9f53b2b4e4e29       coredns-7c65d6cfc9-fchhl
	709da73468c82       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   6c65efd736505       kindnet-ms6qb
	9ea87ecceac1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   bbb4cec818818       kube-proxy-b8x2w
	40f5664db9017       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8b6709d2b9d03       kube-vip-ha-685475
	e62a02dab3075       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   9ade6d826e125       kube-scheduler-ha-685475
	efe5b6f3ceb69       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5fa1209cd75b8       etcd-ha-685475
	5686da29f7aac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   480a4fc4d507f       kube-controller-manager-ha-685475
	838b3cda70bf1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   2ee65b29ae3d2       kube-apiserver-ha-685475
	
	
	==> coredns [2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235] <==
	[INFO] 10.244.2.2:43478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117921s
	[INFO] 10.244.0.4:52601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001246s
	[INFO] 10.244.0.4:57647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118972s
	[INFO] 10.244.0.4:59286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001434237s
	[INFO] 10.244.0.4:55987 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082081s
	[INFO] 10.244.1.2:44949 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002196411s
	[INFO] 10.244.1.2:57646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132442s
	[INFO] 10.244.1.2:45986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001533759s
	[INFO] 10.244.1.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159221s
	[INFO] 10.244.1.2:47730 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122802s
	[INFO] 10.244.2.2:49373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174893s
	[INFO] 10.244.0.4:52492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008787s
	[INFO] 10.244.0.4:33570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049583s
	[INFO] 10.244.0.4:35717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036153s
	[INFO] 10.244.1.2:39348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262289s
	[INFO] 10.244.1.2:44144 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216176s
	[INFO] 10.244.1.2:37532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017928s
	[INFO] 10.244.2.2:34536 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139562s
	[INFO] 10.244.0.4:43378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108735s
	[INFO] 10.244.0.4:50975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139299s
	[INFO] 10.244.0.4:36798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091581s
	[INFO] 10.244.1.2:55450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136524s
	[INFO] 10.244.1.2:46887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019253s
	[INFO] 10.244.1.2:39275 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113225s
	[INFO] 10.244.1.2:44182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097101s
	
	
	==> coredns [75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f] <==
	[INFO] 10.244.2.2:51539 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.04751056s
	[INFO] 10.244.2.2:56073 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013178352s
	[INFO] 10.244.0.4:46583 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000099115s
	[INFO] 10.244.1.2:39503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018791s
	[INFO] 10.244.1.2:56200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000107364s
	[INFO] 10.244.1.2:50181 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000477328s
	[INFO] 10.244.2.2:48517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149349s
	[INFO] 10.244.2.2:37426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161156s
	[INFO] 10.244.2.2:51780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245454s
	[INFO] 10.244.0.4:37360 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192766s
	[INFO] 10.244.0.4:49282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067708s
	[INFO] 10.244.0.4:50475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049077s
	[INFO] 10.244.0.4:42734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103381s
	[INFO] 10.244.1.2:34090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126966s
	[INFO] 10.244.1.2:49474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199973s
	[INFO] 10.244.1.2:47488 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080517s
	[INFO] 10.244.2.2:58501 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129358s
	[INFO] 10.244.2.2:35831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166592s
	[INFO] 10.244.2.2:46260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105019s
	[INFO] 10.244.0.4:34512 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070631s
	[INFO] 10.244.1.2:40219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095437s
	[INFO] 10.244.2.2:45584 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263954s
	[INFO] 10.244.2.2:45346 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105804s
	[INFO] 10.244.2.2:33451 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099783s
	[INFO] 10.244.0.4:54263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102026s
	
	
	==> describe nodes <==
	Name:               ha-685475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-685475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6728db94ca4a90af6f3c76683b52c2
	  System UUID:                7d6728db-94ca-4a90-af6f-3c76683b52c2
	  Boot ID:                    d6338982-1afe-44d6-a104-48e80df984ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmkfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7c65d6cfc9-fchhl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 coredns-7c65d6cfc9-jf7wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 etcd-ha-685475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-ms6qb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-685475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-685475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-b8x2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-685475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-685475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m2s   kube-proxy       
	  Normal  Starting                 6m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s   kubelet          Node ha-685475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s   kubelet          Node ha-685475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s   kubelet          Node ha-685475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  NodeReady                5m50s  kubelet          Node ha-685475 status is now: NodeReady
	  Normal  RegisteredNode           5m10s  node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	
	
	Name:               ha-685475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:42:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:44:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-685475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad56c26961cf4d94852f19122c4c499b
	  System UUID:                ad56c269-61cf-4d94-852f-19122c4c499b
	  Boot ID:                    e772e23b-db48-4470-a822-ef2e8ff749c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6g8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-685475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-pwvfj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m17s
	  kube-system                 kube-apiserver-ha-685475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-685475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-dlr8f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-ha-685475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-685475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-685475-m02 status is now: NodeNotReady
	
	
	Name:               ha-685475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:43:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-685475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666f55d24f014a7598addca9cb06654f
	  System UUID:                666f55d2-4f01-4a75-98ad-dca9cb06654f
	  Boot ID:                    4a6f3fd5-8906-4dce-b1f1-42fe5e6d144d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gksmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-685475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-7w5dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-685475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-685475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-mzlfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-685475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-685475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-685475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	
	
	Name:               ha-685475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_44_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-685475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5be0e3597a0f4236b1fa9e5e221d49dc
	  System UUID:                5be0e359-7a0f-4236-b1fa-9e5e221d49dc
	  Boot ID:                    076086b0-4e87-4ae6-8221-9f0322235896
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4nlv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-9m62z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m4s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m4s)  kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m4s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  NodeReady                2m45s                kubelet          Node ha-685475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep24 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047306] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.684392] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.705375] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.505519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep24 18:41] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.156659] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148421] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.267579] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.782999] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.621822] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.062553] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.171108] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.082463] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344664] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.133235] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:42] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707] <==
	{"level":"warn","ts":"2024-09-24T18:47:27.840024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.843256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.852923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.858286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.864482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.867425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.870333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.876505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.883688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.889286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.893516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.898130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.926689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.927625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.951838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.957030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.960967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.964187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.967778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.971038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.974627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.980562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:27.990055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:28.022415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:28.026329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:47:28 up 6 min,  0 users,  load average: 0.04, 0.20, 0.11
	Linux ha-685475 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678] <==
	I0924 18:46:56.555675       1 main.go:299] handling current node
	I0924 18:47:06.561634       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:06.561692       1 main.go:299] handling current node
	I0924 18:47:06.561710       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:06.561715       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:06.561848       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:06.561866       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:06.561914       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:06.561931       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:16.564762       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:16.564893       1 main.go:299] handling current node
	I0924 18:47:16.564926       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:16.564945       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:16.565064       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:16.565119       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:16.565194       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:16.565212       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:26.555520       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:26.555700       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:26.555958       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:26.556011       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:26.556121       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:26.556157       1 main.go:299] handling current node
	I0924 18:47:26.556192       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:26.556214       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad] <==
	I0924 18:41:17.672745       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 18:41:17.723505       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 18:41:17.816990       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0924 18:41:17.823594       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0924 18:41:17.824633       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:41:17.829868       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:41:18.021888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 18:41:19.286470       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 18:41:19.299197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 18:41:19.310963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 18:41:23.075217       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 18:41:23.423831       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 18:43:54.268115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E0924 18:43:54.604143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58158: use of closed network connection
	E0924 18:43:54.783115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58164: use of closed network connection
	E0924 18:43:54.950893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58168: use of closed network connection
	E0924 18:43:55.309336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58194: use of closed network connection
	E0924 18:43:55.511247       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58214: use of closed network connection
	E0924 18:43:55.954224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58254: use of closed network connection
	E0924 18:43:56.117109       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58266: use of closed network connection
	E0924 18:43:56.281611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58282: use of closed network connection
	E0924 18:43:56.451342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58292: use of closed network connection
	E0924 18:43:56.632767       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58308: use of closed network connection
	E0924 18:43:56.794004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58330: use of closed network connection
	W0924 18:45:17.827671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7 192.168.39.84]
	
	
	==> kube-controller-manager [5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8] <==
	I0924 18:44:24.247180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.247492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.265765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.436622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:44:24.498908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.871884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:26.085940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.805304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.915596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.967113       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.968167       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-685475-m04"
	I0924 18:44:28.400258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:34.420054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.456619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:44:42.456667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.471240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.830571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:54.874379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:45:36.091506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.091566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:45:36.110189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.281556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.99566ms"
	I0924 18:45:36.282660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.243µs"
	I0924 18:45:38.045778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:41.375346       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	
	
	==> kube-proxy [9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:41:25.700409       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:41:25.766662       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	E0924 18:41:25.766911       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:41:25.811114       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:41:25.811144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:41:25.811180       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:41:25.813724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:41:25.814452       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:41:25.814533       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:41:25.818487       1 config.go:199] "Starting service config controller"
	I0924 18:41:25.819365       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:41:25.820408       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:41:25.820718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:41:25.821642       1 config.go:328] "Starting node config controller"
	I0924 18:41:25.822952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:41:25.921008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:41:25.923339       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:41:25.923395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc] <==
	W0924 18:41:16.961127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:41:16.961178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:16.962189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:41:16.962268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.047239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:41:17.047364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.102252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.102364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.222048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:41:17.222166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.230553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:41:17.231072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.384731       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:41:17.384781       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:41:17.385753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.385816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:41:20.277859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 18:43:50.159728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w6g8l" node="ha-685475-m02"
	E0924 18:43:50.159906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" pod="default/busybox-7dff88458-w6g8l"
	E0924 18:43:50.160616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hmkfk" node="ha-685475"
	E0924 18:43:50.160683       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" pod="default/busybox-7dff88458-hmkfk"
	E0924 18:44:24.296261       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:44:24.296334       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d172ae09-1eb7-4e5d-a5a1-e865b926b6eb(kube-system/kube-proxy-9m62z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9m62z"
	E0924 18:44:24.296350       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" pod="kube-system/kube-proxy-9m62z"
	I0924 18:44:24.296367       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	
	
	==> kubelet <==
	Sep 24 18:46:09 ha-685475 kubelet[1306]: E0924 18:46:09.287410    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203569286673502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.240421    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289533    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289568    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292185    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292494    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293680    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293717    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295059    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295397    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296553    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296987    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.298543    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.301982    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.239486    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303369    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303405    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-685475 -n ha-685475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr: (4.229523285s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-685475 -n ha-685475
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 logs -n 25
E0924 18:47:33.649558   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 logs -n 25: (1.331405958s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m03_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m04 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp testdata/cp-test.txt                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m03 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-685475 node stop m02 -v=7                                                    | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-685475 node start m02 -v=7                                                   | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:40:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:40:35.618652   22837 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:40:35.618943   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.618954   22837 out.go:358] Setting ErrFile to fd 2...
	I0924 18:40:35.618959   22837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:40:35.619154   22837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:40:35.619730   22837 out.go:352] Setting JSON to false
	I0924 18:40:35.620645   22837 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1387,"bootTime":1727201849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:40:35.620729   22837 start.go:139] virtualization: kvm guest
	I0924 18:40:35.622855   22837 out.go:177] * [ha-685475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:40:35.624385   22837 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:40:35.624401   22837 notify.go:220] Checking for updates...
	I0924 18:40:35.627290   22837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:40:35.628609   22837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:40:35.629977   22837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.631349   22837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:40:35.632638   22837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:40:35.634090   22837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:40:35.670308   22837 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 18:40:35.671877   22837 start.go:297] selected driver: kvm2
	I0924 18:40:35.671905   22837 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:40:35.671922   22837 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:40:35.672818   22837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.672911   22837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:40:35.688646   22837 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:40:35.688691   22837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:40:35.688908   22837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:40:35.688933   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:40:35.688955   22837 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0924 18:40:35.688963   22837 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 18:40:35.689004   22837 start.go:340] cluster config:
	{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:40:35.689084   22837 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:40:35.691077   22837 out.go:177] * Starting "ha-685475" primary control-plane node in "ha-685475" cluster
	I0924 18:40:35.692675   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:40:35.692727   22837 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:40:35.692737   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:40:35.692807   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:40:35.692817   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:40:35.693129   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:40:35.693148   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json: {Name:mkf04021428036cd37ddc8fca7772aaba780fa7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:40:35.693278   22837 start.go:360] acquireMachinesLock for ha-685475: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:40:35.693307   22837 start.go:364] duration metric: took 16.26µs to acquireMachinesLock for "ha-685475"
	I0924 18:40:35.693323   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:40:35.693388   22837 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 18:40:35.695217   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:40:35.695377   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:40:35.695407   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:40:35.709830   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0924 18:40:35.710273   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:40:35.710759   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:40:35.710782   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:40:35.711106   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:40:35.711266   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:40:35.711382   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:40:35.711548   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:40:35.711571   22837 client.go:168] LocalClient.Create starting
	I0924 18:40:35.711598   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:40:35.711635   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711648   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711694   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:40:35.711713   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:40:35.711724   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:40:35.711739   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:40:35.711747   22837 main.go:141] libmachine: (ha-685475) Calling .PreCreateCheck
	I0924 18:40:35.712023   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:40:35.712397   22837 main.go:141] libmachine: Creating machine...
	I0924 18:40:35.712411   22837 main.go:141] libmachine: (ha-685475) Calling .Create
	I0924 18:40:35.712547   22837 main.go:141] libmachine: (ha-685475) Creating KVM machine...
	I0924 18:40:35.713673   22837 main.go:141] libmachine: (ha-685475) DBG | found existing default KVM network
	I0924 18:40:35.714359   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.714247   22860 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000121a50}
	I0924 18:40:35.714400   22837 main.go:141] libmachine: (ha-685475) DBG | created network xml: 
	I0924 18:40:35.714421   22837 main.go:141] libmachine: (ha-685475) DBG | <network>
	I0924 18:40:35.714434   22837 main.go:141] libmachine: (ha-685475) DBG |   <name>mk-ha-685475</name>
	I0924 18:40:35.714443   22837 main.go:141] libmachine: (ha-685475) DBG |   <dns enable='no'/>
	I0924 18:40:35.714462   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714493   22837 main.go:141] libmachine: (ha-685475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 18:40:35.714508   22837 main.go:141] libmachine: (ha-685475) DBG |     <dhcp>
	I0924 18:40:35.714524   22837 main.go:141] libmachine: (ha-685475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 18:40:35.714536   22837 main.go:141] libmachine: (ha-685475) DBG |     </dhcp>
	I0924 18:40:35.714545   22837 main.go:141] libmachine: (ha-685475) DBG |   </ip>
	I0924 18:40:35.714555   22837 main.go:141] libmachine: (ha-685475) DBG |   
	I0924 18:40:35.714563   22837 main.go:141] libmachine: (ha-685475) DBG | </network>
	I0924 18:40:35.714575   22837 main.go:141] libmachine: (ha-685475) DBG | 
	I0924 18:40:35.719712   22837 main.go:141] libmachine: (ha-685475) DBG | trying to create private KVM network mk-ha-685475 192.168.39.0/24...
	I0924 18:40:35.786088   22837 main.go:141] libmachine: (ha-685475) DBG | private KVM network mk-ha-685475 192.168.39.0/24 created
	I0924 18:40:35.786128   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:35.786012   22860 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:35.786138   22837 main.go:141] libmachine: (ha-685475) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:35.786155   22837 main.go:141] libmachine: (ha-685475) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:40:35.786173   22837 main.go:141] libmachine: (ha-685475) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:40:36.040941   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.040806   22860 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa...
	I0924 18:40:36.268625   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268496   22860 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk...
	I0924 18:40:36.268672   22837 main.go:141] libmachine: (ha-685475) DBG | Writing magic tar header
	I0924 18:40:36.268724   22837 main.go:141] libmachine: (ha-685475) DBG | Writing SSH key tar header
	I0924 18:40:36.268756   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:36.268615   22860 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 ...
	I0924 18:40:36.268769   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475 (perms=drwx------)
	I0924 18:40:36.268781   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:40:36.268787   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:40:36.268796   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:40:36.268804   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:40:36.268835   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475
	I0924 18:40:36.268855   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:40:36.268865   22837 main.go:141] libmachine: (ha-685475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:40:36.268883   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:36.268895   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:40:36.268900   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:40:36.268908   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:40:36.268917   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:40:36.268929   22837 main.go:141] libmachine: (ha-685475) DBG | Checking permissions on dir: /home
	I0924 18:40:36.268937   22837 main.go:141] libmachine: (ha-685475) DBG | Skipping /home - not owner
	I0924 18:40:36.269970   22837 main.go:141] libmachine: (ha-685475) define libvirt domain using xml: 
	I0924 18:40:36.270004   22837 main.go:141] libmachine: (ha-685475) <domain type='kvm'>
	I0924 18:40:36.270014   22837 main.go:141] libmachine: (ha-685475)   <name>ha-685475</name>
	I0924 18:40:36.270022   22837 main.go:141] libmachine: (ha-685475)   <memory unit='MiB'>2200</memory>
	I0924 18:40:36.270031   22837 main.go:141] libmachine: (ha-685475)   <vcpu>2</vcpu>
	I0924 18:40:36.270041   22837 main.go:141] libmachine: (ha-685475)   <features>
	I0924 18:40:36.270049   22837 main.go:141] libmachine: (ha-685475)     <acpi/>
	I0924 18:40:36.270059   22837 main.go:141] libmachine: (ha-685475)     <apic/>
	I0924 18:40:36.270084   22837 main.go:141] libmachine: (ha-685475)     <pae/>
	I0924 18:40:36.270105   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270115   22837 main.go:141] libmachine: (ha-685475)   </features>
	I0924 18:40:36.270123   22837 main.go:141] libmachine: (ha-685475)   <cpu mode='host-passthrough'>
	I0924 18:40:36.270131   22837 main.go:141] libmachine: (ha-685475)   
	I0924 18:40:36.270135   22837 main.go:141] libmachine: (ha-685475)   </cpu>
	I0924 18:40:36.270139   22837 main.go:141] libmachine: (ha-685475)   <os>
	I0924 18:40:36.270143   22837 main.go:141] libmachine: (ha-685475)     <type>hvm</type>
	I0924 18:40:36.270148   22837 main.go:141] libmachine: (ha-685475)     <boot dev='cdrom'/>
	I0924 18:40:36.270152   22837 main.go:141] libmachine: (ha-685475)     <boot dev='hd'/>
	I0924 18:40:36.270157   22837 main.go:141] libmachine: (ha-685475)     <bootmenu enable='no'/>
	I0924 18:40:36.270162   22837 main.go:141] libmachine: (ha-685475)   </os>
	I0924 18:40:36.270168   22837 main.go:141] libmachine: (ha-685475)   <devices>
	I0924 18:40:36.270179   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='cdrom'>
	I0924 18:40:36.270191   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/boot2docker.iso'/>
	I0924 18:40:36.270215   22837 main.go:141] libmachine: (ha-685475)       <target dev='hdc' bus='scsi'/>
	I0924 18:40:36.270223   22837 main.go:141] libmachine: (ha-685475)       <readonly/>
	I0924 18:40:36.270227   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270232   22837 main.go:141] libmachine: (ha-685475)     <disk type='file' device='disk'>
	I0924 18:40:36.270240   22837 main.go:141] libmachine: (ha-685475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:40:36.270255   22837 main.go:141] libmachine: (ha-685475)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/ha-685475.rawdisk'/>
	I0924 18:40:36.270268   22837 main.go:141] libmachine: (ha-685475)       <target dev='hda' bus='virtio'/>
	I0924 18:40:36.270285   22837 main.go:141] libmachine: (ha-685475)     </disk>
	I0924 18:40:36.270298   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270315   22837 main.go:141] libmachine: (ha-685475)       <source network='mk-ha-685475'/>
	I0924 18:40:36.270332   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270343   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270354   22837 main.go:141] libmachine: (ha-685475)     <interface type='network'>
	I0924 18:40:36.270365   22837 main.go:141] libmachine: (ha-685475)       <source network='default'/>
	I0924 18:40:36.270375   22837 main.go:141] libmachine: (ha-685475)       <model type='virtio'/>
	I0924 18:40:36.270384   22837 main.go:141] libmachine: (ha-685475)     </interface>
	I0924 18:40:36.270394   22837 main.go:141] libmachine: (ha-685475)     <serial type='pty'>
	I0924 18:40:36.270402   22837 main.go:141] libmachine: (ha-685475)       <target port='0'/>
	I0924 18:40:36.270412   22837 main.go:141] libmachine: (ha-685475)     </serial>
	I0924 18:40:36.270421   22837 main.go:141] libmachine: (ha-685475)     <console type='pty'>
	I0924 18:40:36.270430   22837 main.go:141] libmachine: (ha-685475)       <target type='serial' port='0'/>
	I0924 18:40:36.270438   22837 main.go:141] libmachine: (ha-685475)     </console>
	I0924 18:40:36.270445   22837 main.go:141] libmachine: (ha-685475)     <rng model='virtio'>
	I0924 18:40:36.270455   22837 main.go:141] libmachine: (ha-685475)       <backend model='random'>/dev/random</backend>
	I0924 18:40:36.270471   22837 main.go:141] libmachine: (ha-685475)     </rng>
	I0924 18:40:36.270484   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270496   22837 main.go:141] libmachine: (ha-685475)     
	I0924 18:40:36.270507   22837 main.go:141] libmachine: (ha-685475)   </devices>
	I0924 18:40:36.270515   22837 main.go:141] libmachine: (ha-685475) </domain>
	I0924 18:40:36.270524   22837 main.go:141] libmachine: (ha-685475) 
	I0924 18:40:36.274620   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:29:bb:c5 in network default
	I0924 18:40:36.275145   22837 main.go:141] libmachine: (ha-685475) Ensuring networks are active...
	I0924 18:40:36.275164   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:36.275867   22837 main.go:141] libmachine: (ha-685475) Ensuring network default is active
	I0924 18:40:36.276239   22837 main.go:141] libmachine: (ha-685475) Ensuring network mk-ha-685475 is active
	I0924 18:40:36.276892   22837 main.go:141] libmachine: (ha-685475) Getting domain xml...
	I0924 18:40:36.277603   22837 main.go:141] libmachine: (ha-685475) Creating domain...
	I0924 18:40:37.460480   22837 main.go:141] libmachine: (ha-685475) Waiting to get IP...
	I0924 18:40:37.461314   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.461739   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.461774   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.461717   22860 retry.go:31] will retry after 296.388363ms: waiting for machine to come up
	I0924 18:40:37.760304   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:37.760785   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:37.760810   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:37.760740   22860 retry.go:31] will retry after 328.765263ms: waiting for machine to come up
	I0924 18:40:38.091364   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.091840   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.091866   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.091794   22860 retry.go:31] will retry after 475.786926ms: waiting for machine to come up
	I0924 18:40:38.569463   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:38.569893   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:38.569921   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:38.569836   22860 retry.go:31] will retry after 449.224473ms: waiting for machine to come up
	I0924 18:40:39.020465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.020861   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.020885   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.020825   22860 retry.go:31] will retry after 573.37705ms: waiting for machine to come up
	I0924 18:40:39.595466   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:39.595901   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:39.595920   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:39.595866   22860 retry.go:31] will retry after 888.819714ms: waiting for machine to come up
	I0924 18:40:40.485857   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:40.486194   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:40.486220   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:40.486169   22860 retry.go:31] will retry after 849.565748ms: waiting for machine to come up
	I0924 18:40:41.336920   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:41.337334   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:41.337355   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:41.337299   22860 retry.go:31] will retry after 943.088304ms: waiting for machine to come up
	I0924 18:40:42.282339   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:42.282747   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:42.282769   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:42.282704   22860 retry.go:31] will retry after 1.602523393s: waiting for machine to come up
	I0924 18:40:43.887465   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:43.887909   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:43.887926   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:43.887863   22860 retry.go:31] will retry after 1.565249639s: waiting for machine to come up
	I0924 18:40:45.455849   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:45.456357   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:45.456383   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:45.456304   22860 retry.go:31] will retry after 2.532618475s: waiting for machine to come up
	I0924 18:40:47.991803   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:47.992180   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:47.992208   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:47.992135   22860 retry.go:31] will retry after 2.721738632s: waiting for machine to come up
	I0924 18:40:50.715276   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:50.715664   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:50.715696   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:50.715634   22860 retry.go:31] will retry after 2.97095557s: waiting for machine to come up
	I0924 18:40:53.689583   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:53.689985   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find current IP address of domain ha-685475 in network mk-ha-685475
	I0924 18:40:53.690027   22837 main.go:141] libmachine: (ha-685475) DBG | I0924 18:40:53.689963   22860 retry.go:31] will retry after 4.964736548s: waiting for machine to come up
	I0924 18:40:58.657846   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658217   22837 main.go:141] libmachine: (ha-685475) Found IP for machine: 192.168.39.7
	I0924 18:40:58.658231   22837 main.go:141] libmachine: (ha-685475) Reserving static IP address...
	I0924 18:40:58.658245   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has current primary IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.658686   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "ha-685475", mac: "52:54:00:bb:26:52", ip: "192.168.39.7"} in network mk-ha-685475
	I0924 18:40:58.726895   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:40:58.726926   22837 main.go:141] libmachine: (ha-685475) Reserved static IP address: 192.168.39.7
	I0924 18:40:58.726937   22837 main.go:141] libmachine: (ha-685475) Waiting for SSH to be available...
	I0924 18:40:58.729433   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:40:58.729749   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475
	I0924 18:40:58.729778   22837 main.go:141] libmachine: (ha-685475) DBG | unable to find defined IP address of network mk-ha-685475 interface with MAC address 52:54:00:bb:26:52
	I0924 18:40:58.729916   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:40:58.729941   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:40:58.729969   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:40:58.729980   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:40:58.729993   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:40:58.733379   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: exit status 255: 
	I0924 18:40:58.733402   22837 main.go:141] libmachine: (ha-685475) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 18:40:58.733413   22837 main.go:141] libmachine: (ha-685475) DBG | command : exit 0
	I0924 18:40:58.733422   22837 main.go:141] libmachine: (ha-685475) DBG | err     : exit status 255
	I0924 18:40:58.733432   22837 main.go:141] libmachine: (ha-685475) DBG | output  : 
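The failed probe above is expected on the first pass: libmachine's WaitForSSH simply keeps running a no-op `exit 0` over SSH until the guest's sshd answers. A minimal shell sketch of that readiness loop, using the same SSH options the log shows (the host, key path, and retry budget below are illustrative placeholders, not values taken from this run):

    #!/usr/bin/env bash
    # Poll SSH readiness by running a no-op command until it exits 0.
    HOST=192.168.39.7                                  # guest IP from the DHCP lease
    KEY=$HOME/.minikube/machines/ha-685475/id_rsa      # per-machine private key (path shortened for illustration)
    for attempt in $(seq 1 10); do
      if ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
             -o ConnectTimeout=10 -o PasswordAuthentication=no \
             -i "$KEY" docker@"$HOST" 'exit 0'; then
        echo "SSH is available"
        break
      fi
      echo "SSH not ready yet (attempt $attempt), retrying..."
      sleep 3
    done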
	I0924 18:41:01.734078   22837 main.go:141] libmachine: (ha-685475) DBG | Getting to WaitForSSH function...
	I0924 18:41:01.736442   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736846   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.736875   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.736966   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH client type: external
	I0924 18:41:01.736988   22837 main.go:141] libmachine: (ha-685475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa (-rw-------)
	I0924 18:41:01.737029   22837 main.go:141] libmachine: (ha-685475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:01.737052   22837 main.go:141] libmachine: (ha-685475) DBG | About to run SSH command:
	I0924 18:41:01.737065   22837 main.go:141] libmachine: (ha-685475) DBG | exit 0
	I0924 18:41:01.858518   22837 main.go:141] libmachine: (ha-685475) DBG | SSH cmd err, output: <nil>: 
	I0924 18:41:01.858812   22837 main.go:141] libmachine: (ha-685475) KVM machine creation complete!
	I0924 18:41:01.859085   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:01.859647   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859818   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:01.859970   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:01.859985   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:01.861184   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:01.861196   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:01.861201   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:01.861206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.863734   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864111   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.864137   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.864287   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.864470   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864641   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.864792   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.864958   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.865168   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.865180   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:01.965971   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:01.965992   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:01.965999   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:01.968393   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968679   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:01.968705   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:01.968849   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:01.968989   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969127   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:01.969226   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:01.969360   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:01.969511   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:01.969521   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:02.070902   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:02.070990   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:02.071004   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:02.071015   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071246   22837 buildroot.go:166] provisioning hostname "ha-685475"
	I0924 18:41:02.071275   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.071415   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.074599   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.074996   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.075019   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.075149   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.075311   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075419   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.075520   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.075644   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.075797   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.075808   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475 && echo "ha-685475" | sudo tee /etc/hostname
	I0924 18:41:02.191183   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:41:02.191206   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.193903   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194254   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.194277   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.194435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.194612   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194742   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.194863   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.195018   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.195214   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.195234   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:02.306707   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:02.306732   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:02.306752   22837 buildroot.go:174] setting up certificates
	I0924 18:41:02.306763   22837 provision.go:84] configureAuth start
	I0924 18:41:02.306771   22837 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:41:02.307067   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:02.309510   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309793   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.309820   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.309932   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.311757   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312020   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.312040   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.312160   22837 provision.go:143] copyHostCerts
	I0924 18:41:02.312182   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312213   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:02.312221   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:02.312284   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:02.312357   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312374   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:02.312380   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:02.312403   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:02.312444   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312461   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:02.312467   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:02.312487   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:02.312532   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475 san=[127.0.0.1 192.168.39.7 ha-685475 localhost minikube]
	I0924 18:41:02.610752   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:02.610810   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:02.610847   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.613269   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613544   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.613580   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.613691   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.613856   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.614031   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.614140   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:02.696690   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:02.696775   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 18:41:02.719028   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:02.719087   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 18:41:02.740811   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:02.740889   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:02.762904   22837 provision.go:87] duration metric: took 456.128009ms to configureAuth
	I0924 18:41:02.762937   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:02.763113   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:02.763199   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.765836   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766227   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.766253   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.766382   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.766616   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766752   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.766881   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.767012   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:02.767181   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:02.767201   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:02.983298   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
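The drop-in written above passes `--insecure-registry 10.96.0.0/12` to CRI-O; 10.96.0.0/12 is the cluster's service CIDR (see the kubeadm config later in this log), so ClusterIP-backed registries inside the cluster can be pulled from without TLS. Whether CRI-O actually picks the variable up depends on the ISO's crio.service sourcing /etc/sysconfig/crio.minikube, which this log does not show; a hedged, read-only way to inspect the result on the guest:

    # Show the drop-in and confirm CRI-O restarted cleanly (illustrative checks, not part of this run).
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio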
	
	I0924 18:41:02.983327   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:02.983336   22837 main.go:141] libmachine: (ha-685475) Calling .GetURL
	I0924 18:41:02.984661   22837 main.go:141] libmachine: (ha-685475) DBG | Using libvirt version 6000000
	I0924 18:41:02.986674   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.986998   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.987035   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.987171   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:02.987184   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:02.987191   22837 client.go:171] duration metric: took 27.275613308s to LocalClient.Create
	I0924 18:41:02.987217   22837 start.go:167] duration metric: took 27.275670931s to libmachine.API.Create "ha-685475"
	I0924 18:41:02.987229   22837 start.go:293] postStartSetup for "ha-685475" (driver="kvm2")
	I0924 18:41:02.987244   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:02.987264   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:02.987513   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:02.987534   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:02.989371   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989734   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:02.989749   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:02.989938   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:02.990114   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:02.990358   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:02.990533   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.072587   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:03.076584   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:03.076617   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:03.076688   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:03.076760   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:03.076772   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:03.076869   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:03.085953   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:03.108631   22837 start.go:296] duration metric: took 121.38524ms for postStartSetup
	I0924 18:41:03.108689   22837 main.go:141] libmachine: (ha-685475) Calling .GetConfigRaw
	I0924 18:41:03.109239   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.111776   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112078   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.112107   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.112319   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:03.112501   22837 start.go:128] duration metric: took 27.419103166s to createHost
	I0924 18:41:03.112522   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.114886   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115236   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.115261   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.115422   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.115597   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115736   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.115880   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.116026   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:03.116220   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:41:03.116230   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:03.223401   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203263.206629374
	
	I0924 18:41:03.223425   22837 fix.go:216] guest clock: 1727203263.206629374
	I0924 18:41:03.223432   22837 fix.go:229] Guest: 2024-09-24 18:41:03.206629374 +0000 UTC Remote: 2024-09-24 18:41:03.112512755 +0000 UTC m=+27.526898013 (delta=94.116619ms)
	I0924 18:41:03.223470   22837 fix.go:200] guest clock delta is within tolerance: 94.116619ms
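The clock check above runs `date +%s.%N` in the guest and compares it to the host's wall clock; the machine is accepted because the 94 ms delta is under minikube's skew tolerance. A rough shell equivalent of that comparison (the SSH target and the 1-second tolerance here are illustrative assumptions, not the values hard-coded in fix.go):

    # Compare guest vs. host wall clocks over SSH and check the absolute delta.
    guest=$(ssh docker@192.168.39.7 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(echo "$host - $guest" | bc | tr -d '-')   # absolute difference in seconds
    if [ "$(echo "$delta < 1" | bc)" -eq 1 ]; then
      echo "guest clock delta ${delta}s is within tolerance"
    else
      echo "guest clock skew too large: ${delta}s"
    fi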
	I0924 18:41:03.223475   22837 start.go:83] releasing machines lock for "ha-685475", held for 27.53015951s
	I0924 18:41:03.223493   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.223794   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:03.226346   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226711   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.226738   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.226887   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227337   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227484   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:03.227576   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:03.227627   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.227700   22837 ssh_runner.go:195] Run: cat /version.json
	I0924 18:41:03.227725   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:03.230122   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230442   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230467   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230533   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.230587   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.230756   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.230907   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.230941   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:03.230962   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:03.231017   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.231113   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:03.231229   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:03.231324   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:03.231424   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:03.307645   22837 ssh_runner.go:195] Run: systemctl --version
	I0924 18:41:03.331733   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:03.485763   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:03.491914   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:03.491985   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:03.507429   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:03.507461   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:03.507517   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:03.523186   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:03.536999   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:03.537069   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:03.550683   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:03.564455   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:03.675808   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:03.815291   22837 docker.go:233] disabling docker service ...
	I0924 18:41:03.815369   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:03.829457   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:03.842075   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:03.968977   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:04.100834   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
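The sequence above stops, disables, and masks cri-docker and docker so that CRI-O is the only container runtime left on the node. A hedged post-check one could run on the guest to confirm that state (standard systemd queries, not commands from this run):

    # docker and cri-docker should be masked/inactive; crio becomes the active runtime after the restart below.
    systemctl is-enabled docker.service cri-docker.service 2>/dev/null   # expect "masked"
    systemctl is-active docker.service                                   # expect "inactive"
    systemctl is-active crio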
	I0924 18:41:04.114151   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:04.131432   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:04.131492   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.141141   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:04.141212   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.150778   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.160259   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.169851   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:04.179488   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.189760   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:04.206045   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
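The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, conmon_cgroup to pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A quick, read-only way to see the resulting values (not part of this run):

    # Show the settings the sed edits above are expected to leave behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf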
	I0924 18:41:04.215615   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:04.224420   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:04.224481   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:04.237154   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
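The sysctl probe fails only because br_netfilter is not loaded yet; loading the module and enabling IPv4 forwarding, as the two commands above do, are the standard kernel prerequisites for Kubernetes pod networking. A hedged recap, with optional persistence steps added purely for illustration (they are not part of this run):

    # Load the bridge netfilter module and enable IP forwarding for this boot.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # Optional: persist across reboots (illustrative only).
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf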
	I0924 18:41:04.245941   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:04.372069   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:41:04.462010   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:04.462086   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:04.466695   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:04.466753   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:04.470287   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:04.509294   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:04.509389   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.538739   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:04.567366   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:04.568751   22837 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:41:04.571725   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572167   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:04.572191   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:04.572415   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:04.576247   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:04.588081   22837 kubeadm.go:883] updating cluster {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:41:04.588171   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:04.588210   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:04.618331   22837 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 18:41:04.618391   22837 ssh_runner.go:195] Run: which lz4
	I0924 18:41:04.622176   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0924 18:41:04.622306   22837 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 18:41:04.626507   22837 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 18:41:04.626538   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 18:41:05.822721   22837 crio.go:462] duration metric: took 1.200469004s to copy over tarball
	I0924 18:41:05.822802   22837 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 18:41:07.793883   22837 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.971051538s)
	I0924 18:41:07.793914   22837 crio.go:469] duration metric: took 1.971161974s to extract the tarball
	I0924 18:41:07.793928   22837 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 18:41:07.830067   22837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:41:07.873646   22837 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:41:07.873666   22837 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:41:07.873673   22837 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.31.1 crio true true} ...
	I0924 18:41:07.873776   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:41:07.873869   22837 ssh_runner.go:195] Run: crio config
	I0924 18:41:07.919600   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:07.919618   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:07.919627   22837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:41:07.919646   22837 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685475 NodeName:ha-685475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:41:07.919771   22837 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
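The rendered kubeadm config above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new a few steps below and later consumed by kubeadm v1.31.1. Assuming the binaries found under /var/lib/minikube/binaries/v1.31.1, a hedged sanity check one could run against such a file is:

    # Validate the generated config with the matching kubeadm binary (illustrative check, not part of this run).
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new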
	
	I0924 18:41:07.919801   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:07.919842   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:07.935217   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:07.935310   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
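The manifest above runs kube-vip as a static pod on the control plane: it claims the HA virtual IP 192.168.39.254 on eth0 and load-balances the API server on port 8443 (cp_enable, lb_enable, lb_port), which is why the same address is mapped to control-plane.minikube.internal in /etc/hosts further down. Once the pod is running, a hedged way to confirm the VIP is bound (not a command from this run):

    # The VIP should appear as a secondary address on eth0 once kube-vip wins leader election.
    ip addr show eth0 | grep 192.168.39.254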
	I0924 18:41:07.935358   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:07.945016   22837 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:41:07.945087   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 18:41:07.954390   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0924 18:41:07.970734   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:07.986979   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0924 18:41:08.003862   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0924 18:41:08.020369   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:08.024317   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:08.036613   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:08.156453   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:08.174003   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.7
	I0924 18:41:08.174027   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:08.174053   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.174225   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:08.174336   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:08.174354   22837 certs.go:256] generating profile certs ...
	I0924 18:41:08.174424   22837 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:08.174441   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt with IP's: []
	I0924 18:41:08.287248   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt ...
	I0924 18:41:08.287273   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt: {Name:mkaceb17faeee44eeb1f13a92453dd9237d1455b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287463   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key ...
	I0924 18:41:08.287478   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key: {Name:mkbd762d73e102d20739c242c4dc875214afceba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.287585   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac
	I0924 18:41:08.287601   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.254]
	I0924 18:41:08.420508   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac ...
	I0924 18:41:08.420553   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac: {Name:mk9b48c67c74aab074e9cdcef91880f465361f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420805   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac ...
	I0924 18:41:08.420830   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac: {Name:mk62b56ebe2e46561c15a5b3088127454fecceb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.420950   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:08.421025   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.2dedd2ac -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
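Note that the apiserver serving certificate assembled above carries SANs for the service IPs 10.96.0.1 and 10.0.0.1, loopback, the node IP 192.168.39.7, and the kube-vip VIP 192.168.39.254, so clients can reach the API server through any of those addresses. A hedged way to inspect the SANs on disk (openssl invocation shown purely for illustration):

    # List the Subject Alternative Names baked into the generated apiserver certificate.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt \
        | grep -A1 'Subject Alternative Name'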
	I0924 18:41:08.421075   22837 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:08.421093   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt with IP's: []
	I0924 18:41:08.543472   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt ...
	I0924 18:41:08.543508   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt: {Name:mk21cf6990553b97f2812e699190b5a379943f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543691   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key ...
	I0924 18:41:08.543706   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key: {Name:mk47726c7ba1340c780d325e14f433f9d0586f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:08.543805   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:08.543829   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:08.543844   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:08.543860   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:08.543879   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:08.543898   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:08.543917   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:08.543935   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:08.543997   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:08.544044   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:08.544059   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:08.544094   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:08.544127   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:08.544158   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:08.544210   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:08.544249   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.544270   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.544289   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.544858   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:08.570597   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:08.594223   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:08.617808   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:08.641632   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 18:41:08.665659   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:08.689661   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:08.713308   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:08.737197   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:08.762148   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:08.788186   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:08.813589   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:41:08.831743   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:08.837364   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:08.849428   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854475   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.854538   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:08.860154   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:41:08.871267   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:08.882296   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886561   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.886625   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:08.892075   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:08.902853   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:08.913706   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.917998   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.918060   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:08.923875   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
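The openssl/ln commands above install each CA into the guest's trust store: compute the OpenSSL subject hash of the certificate, then create an /etc/ssl/certs/<hash>.0 symlink pointing at it. A small Go sketch that mirrors those two shell commands (the paths are the examples from the log; this is not minikube's code):

    // hashlink.go - sketch of the "openssl x509 -hash -noout -in" + "ln -fs" steps
    // from the log: link a CA certificate into /etc/ssl/certs by its subject hash.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        // Ask openssl for the subject hash, exactly like the logged command.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "openssl failed:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // /etc/ssl/certs/<hash>.0 -> the certificate, so TLS clients can find it.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // equivalent of ln -fs: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Printf("linked %s -> %s\n", link, cert)
    }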
	I0924 18:41:08.937683   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:08.942083   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:08.942144   22837 kubeadm.go:392] StartCluster: {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:08.942205   22837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:41:08.942246   22837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:41:08.996144   22837 cri.go:89] found id: ""
	I0924 18:41:08.996211   22837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:41:09.006172   22837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:41:09.015736   22837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:41:09.025439   22837 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:41:09.025460   22837 kubeadm.go:157] found existing configuration files:
	
	I0924 18:41:09.025508   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:41:09.034746   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:41:09.034800   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:41:09.044191   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:41:09.053192   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:41:09.053253   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:41:09.062560   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.071543   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:41:09.071616   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:41:09.080990   22837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:41:09.089937   22837 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:41:09.090011   22837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
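The grep/rm sequence above is the stale-config cleanup: any pre-existing kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm runs. A hedged Go sketch of the same check (illustrative only; minikube actually performs it over SSH via ssh_runner):

    // staleconfig.go - sketch of the stale-config check from the log: each existing
    // kubeconfig under /etc/kubernetes must point at the expected control-plane
    // endpoint, otherwise it is removed before "kubeadm init" runs.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }

        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if os.IsNotExist(err) {
                continue // nothing to clean up, as in the "No such file" lines above
            } else if err != nil {
                fmt.Fprintln(os.Stderr, "read failed:", err)
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s does not reference %s - removing\n", conf, endpoint)
                _ = os.Remove(conf)
            }
        }
    }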
	I0924 18:41:09.099338   22837 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:41:09.200102   22837 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:41:09.200206   22837 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:41:09.288288   22837 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:41:09.288440   22837 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:41:09.288580   22837 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:41:09.299649   22837 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:41:09.414648   22837 out.go:235]   - Generating certificates and keys ...
	I0924 18:41:09.414792   22837 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:41:09.414929   22837 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:41:09.453019   22837 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:41:09.665252   22837 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:41:09.786773   22837 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:41:09.895285   22837 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:41:10.253463   22837 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:41:10.253620   22837 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.418238   22837 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:41:10.418481   22837 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-685475 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0924 18:41:10.573281   22837 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:41:10.657693   22837 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:41:10.807528   22837 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:41:10.807638   22837 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:41:10.929209   22837 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:41:11.169941   22837 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:41:11.264501   22837 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:41:11.399230   22837 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:41:11.616228   22837 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:41:11.616627   22837 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:41:11.619943   22837 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:41:11.621650   22837 out.go:235]   - Booting up control plane ...
	I0924 18:41:11.621746   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:41:11.621863   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:41:11.621965   22837 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:41:11.642334   22837 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:41:11.648424   22837 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:41:11.648483   22837 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:41:11.789428   22837 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:41:11.789563   22837 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:41:12.790634   22837 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001755257s
	I0924 18:41:12.790735   22837 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:41:18.478058   22837 kubeadm.go:310] [api-check] The API server is healthy after 5.68964956s
	I0924 18:41:18.493860   22837 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:41:18.510122   22837 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:41:18.541786   22837 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:41:18.541987   22837 kubeadm.go:310] [mark-control-plane] Marking the node ha-685475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:41:18.554344   22837 kubeadm.go:310] [bootstrap-token] Using token: 7i3lxo.hk68lojtv0dswhd7
	I0924 18:41:18.555710   22837 out.go:235]   - Configuring RBAC rules ...
	I0924 18:41:18.555857   22837 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:41:18.562776   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:41:18.572835   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:41:18.581420   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:41:18.584989   22837 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:41:18.590727   22837 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:41:18.886783   22837 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:41:19.308273   22837 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:41:19.885351   22837 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:41:19.886864   22837 kubeadm.go:310] 
	I0924 18:41:19.886947   22837 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:41:19.886955   22837 kubeadm.go:310] 
	I0924 18:41:19.887084   22837 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:41:19.887110   22837 kubeadm.go:310] 
	I0924 18:41:19.887149   22837 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:41:19.887252   22837 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:41:19.887307   22837 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:41:19.887317   22837 kubeadm.go:310] 
	I0924 18:41:19.887400   22837 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:41:19.887409   22837 kubeadm.go:310] 
	I0924 18:41:19.887475   22837 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:41:19.887492   22837 kubeadm.go:310] 
	I0924 18:41:19.887567   22837 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:41:19.887670   22837 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:41:19.887778   22837 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:41:19.887818   22837 kubeadm.go:310] 
	I0924 18:41:19.887934   22837 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:41:19.888013   22837 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:41:19.888020   22837 kubeadm.go:310] 
	I0924 18:41:19.888111   22837 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888252   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 18:41:19.888288   22837 kubeadm.go:310] 	--control-plane 
	I0924 18:41:19.888296   22837 kubeadm.go:310] 
	I0924 18:41:19.888373   22837 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:41:19.888384   22837 kubeadm.go:310] 
	I0924 18:41:19.888452   22837 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7i3lxo.hk68lojtv0dswhd7 \
	I0924 18:41:19.888539   22837 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 18:41:19.889407   22837 kubeadm.go:310] W0924 18:41:09.185692     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889718   22837 kubeadm.go:310] W0924 18:41:09.186387     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:41:19.889856   22837 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:41:19.889883   22837 cni.go:84] Creating CNI manager for ""
	I0924 18:41:19.889890   22837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 18:41:19.892313   22837 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 18:41:19.893563   22837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 18:41:19.898820   22837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 18:41:19.898856   22837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 18:41:19.916356   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
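Because multinode was detected, the kindnet CNI manifest is written to /var/tmp/minikube/cni.yaml and applied with the node's bundled kubectl against the local kubeconfig. A minimal sketch of that apply step, shelling out the same way the logged command does (illustrative, not minikube's ssh_runner):

    // applycni.go - sketch of the logged step that applies the CNI manifest with
    // the node's kubectl binary and kubeconfig. Paths are taken from the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        cmd := exec.Command("sudo", kubectl,
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
            os.Exit(1)
        }
    }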
	I0924 18:41:20.290022   22837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:41:20.290096   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.290149   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475 minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=true
	I0924 18:41:20.340090   22837 ops.go:34] apiserver oom_adj: -16
	I0924 18:41:20.448075   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:20.948257   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.448755   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:21.948360   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.448489   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:22.948535   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:41:23.038503   22837 kubeadm.go:1113] duration metric: took 2.748466322s to wait for elevateKubeSystemPrivileges
	I0924 18:41:23.038543   22837 kubeadm.go:394] duration metric: took 14.096402684s to StartCluster
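The repeated "kubectl get sa default" runs above are a readiness poll: the command is retried roughly every 500ms until the default service account exists (about 2.7s in this run). A minimal sketch of that poll, reusing the logged command line (the loop structure and timeout are assumptions, not minikube's code):

    // waitsa.go - sketch of polling for the default service account before the
    // kube-system RBAC binding is considered done, as seen in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        deadline := time.Now().Add(2 * time.Minute)

        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
        os.Exit(1)
    }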
	I0924 18:41:23.038566   22837 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.038649   22837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.039313   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:23.039501   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:41:23.039502   22837 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.039576   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:41:23.039526   22837 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 18:41:23.039598   22837 addons.go:69] Setting storage-provisioner=true in profile "ha-685475"
	I0924 18:41:23.039615   22837 addons.go:234] Setting addon storage-provisioner=true in "ha-685475"
	I0924 18:41:23.039616   22837 addons.go:69] Setting default-storageclass=true in profile "ha-685475"
	I0924 18:41:23.039640   22837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-685475"
	I0924 18:41:23.039645   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.039696   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.040106   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040124   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.040143   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.040155   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.054906   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0924 18:41:23.055238   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0924 18:41:23.055452   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055608   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.055957   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.055986   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056221   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.056245   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.056263   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056409   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.056534   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.056961   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.056989   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.058582   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:41:23.058812   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 18:41:23.059257   22837 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 18:41:23.059411   22837 addons.go:234] Setting addon default-storageclass=true in "ha-685475"
	I0924 18:41:23.059452   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:23.059725   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.059753   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.070908   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0924 18:41:23.071353   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.071899   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.071925   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.072270   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.072451   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.073858   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0924 18:41:23.073870   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.074183   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.074573   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.074598   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.074991   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.075491   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.075531   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.075879   22837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:41:23.077225   22837 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.077247   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:41:23.077265   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.079855   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080215   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.080236   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.080425   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.080576   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.080722   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.080813   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.091212   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0924 18:41:23.091717   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.092134   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.092151   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.092427   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.092615   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:23.094110   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:23.094306   22837 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.094320   22837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:41:23.094337   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:23.097202   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097634   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:23.097661   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:23.097807   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:23.097981   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:23.098125   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:23.098244   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:23.157451   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 18:41:23.219332   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:41:23.236503   22837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:41:23.513482   22837 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
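The long sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-only gateway (192.168.39.1): a hosts block is inserted just before the "forward . /etc/resolv.conf" plugin line, and the edited ConfigMap is then replaced. A hedged Go sketch of just the Corefile edit (the sample Corefile and the string surgery are illustrative assumptions, not minikube's code):

    // corednshosts.go - sketch of the Corefile edit performed by the sed pipeline
    // in the log: insert a "hosts" block mapping host.minikube.internal to the
    // host IP immediately before the forward plugin line.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            // Insert the hosts block just before the forward plugin line.
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }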
	I0924 18:41:23.780293   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780320   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780368   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780387   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780643   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780651   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780659   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780662   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780669   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780671   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.780677   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780679   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.780872   22837 main.go:141] libmachine: (ha-685475) DBG | Closing plugin on server side
	I0924 18:41:23.780906   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780911   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780919   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.780967   22837 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 18:41:23.780985   22837 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 18:41:23.781073   22837 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 18:41:23.781083   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.781093   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.781099   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.795500   22837 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0924 18:41:23.796218   22837 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 18:41:23.796237   22837 round_trippers.go:469] Request Headers:
	I0924 18:41:23.796248   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:41:23.796255   22837 round_trippers.go:473]     Content-Type: application/json
	I0924 18:41:23.796259   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:41:23.798194   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0924 18:41:23.798350   22837 main.go:141] libmachine: Making call to close driver server
	I0924 18:41:23.798369   22837 main.go:141] libmachine: (ha-685475) Calling .Close
	I0924 18:41:23.798603   22837 main.go:141] libmachine: Successfully made call to close driver server
	I0924 18:41:23.798620   22837 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 18:41:23.800167   22837 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 18:41:23.801238   22837 addons.go:510] duration metric: took 761.715981ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 18:41:23.801274   22837 start.go:246] waiting for cluster config update ...
	I0924 18:41:23.801288   22837 start.go:255] writing updated cluster config ...
	I0924 18:41:23.802705   22837 out.go:201] 
	I0924 18:41:23.804213   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:23.804273   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.806007   22837 out.go:177] * Starting "ha-685475-m02" control-plane node in "ha-685475" cluster
	I0924 18:41:23.807501   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:41:23.807522   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:41:23.807605   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:41:23.807617   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:41:23.807680   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:23.807853   22837 start.go:360] acquireMachinesLock for ha-685475-m02: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:41:23.807905   22837 start.go:364] duration metric: took 31.255µs to acquireMachinesLock for "ha-685475-m02"
	I0924 18:41:23.807922   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:23.808020   22837 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 18:41:23.809639   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:41:23.809702   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:23.809724   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:23.823910   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0924 18:41:23.824393   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:23.824838   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:23.824857   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:23.825193   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:23.825352   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:23.825501   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:23.825615   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:41:23.825634   22837 client.go:168] LocalClient.Create starting
	I0924 18:41:23.825657   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:41:23.825684   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825697   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825743   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:41:23.825761   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:41:23.825771   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:41:23.825785   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:41:23.825792   22837 main.go:141] libmachine: (ha-685475-m02) Calling .PreCreateCheck
	I0924 18:41:23.825960   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:23.826338   22837 main.go:141] libmachine: Creating machine...
	I0924 18:41:23.826355   22837 main.go:141] libmachine: (ha-685475-m02) Calling .Create
	I0924 18:41:23.826493   22837 main.go:141] libmachine: (ha-685475-m02) Creating KVM machine...
	I0924 18:41:23.827625   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing default KVM network
	I0924 18:41:23.827759   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found existing private KVM network mk-ha-685475
	I0924 18:41:23.827871   22837 main.go:141] libmachine: (ha-685475-m02) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:23.827888   22837 main.go:141] libmachine: (ha-685475-m02) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:41:23.827966   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:23.827870   23203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:23.828041   22837 main.go:141] libmachine: (ha-685475-m02) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:41:24.081911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.081766   23203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa...
	I0924 18:41:24.287254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287116   23203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk...
	I0924 18:41:24.287289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing magic tar header
	I0924 18:41:24.287303   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Writing SSH key tar header
	I0924 18:41:24.287322   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:24.287234   23203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 ...
	I0924 18:41:24.287343   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02
	I0924 18:41:24.287363   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02 (perms=drwx------)
	I0924 18:41:24.287376   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:41:24.287386   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:41:24.287429   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:41:24.287454   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:41:24.287465   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:41:24.287486   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:41:24.287508   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:41:24.287521   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:41:24.287531   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:41:24.287541   22837 main.go:141] libmachine: (ha-685475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:41:24.287551   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Checking permissions on dir: /home
	I0924 18:41:24.287560   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Skipping /home - not owner
	I0924 18:41:24.287570   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:24.288399   22837 main.go:141] libmachine: (ha-685475-m02) define libvirt domain using xml: 
	I0924 18:41:24.288421   22837 main.go:141] libmachine: (ha-685475-m02) <domain type='kvm'>
	I0924 18:41:24.288434   22837 main.go:141] libmachine: (ha-685475-m02)   <name>ha-685475-m02</name>
	I0924 18:41:24.288441   22837 main.go:141] libmachine: (ha-685475-m02)   <memory unit='MiB'>2200</memory>
	I0924 18:41:24.288467   22837 main.go:141] libmachine: (ha-685475-m02)   <vcpu>2</vcpu>
	I0924 18:41:24.288485   22837 main.go:141] libmachine: (ha-685475-m02)   <features>
	I0924 18:41:24.288491   22837 main.go:141] libmachine: (ha-685475-m02)     <acpi/>
	I0924 18:41:24.288498   22837 main.go:141] libmachine: (ha-685475-m02)     <apic/>
	I0924 18:41:24.288503   22837 main.go:141] libmachine: (ha-685475-m02)     <pae/>
	I0924 18:41:24.288510   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288517   22837 main.go:141] libmachine: (ha-685475-m02)   </features>
	I0924 18:41:24.288525   22837 main.go:141] libmachine: (ha-685475-m02)   <cpu mode='host-passthrough'>
	I0924 18:41:24.288550   22837 main.go:141] libmachine: (ha-685475-m02)   
	I0924 18:41:24.288565   22837 main.go:141] libmachine: (ha-685475-m02)   </cpu>
	I0924 18:41:24.288574   22837 main.go:141] libmachine: (ha-685475-m02)   <os>
	I0924 18:41:24.288586   22837 main.go:141] libmachine: (ha-685475-m02)     <type>hvm</type>
	I0924 18:41:24.288602   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='cdrom'/>
	I0924 18:41:24.288616   22837 main.go:141] libmachine: (ha-685475-m02)     <boot dev='hd'/>
	I0924 18:41:24.288629   22837 main.go:141] libmachine: (ha-685475-m02)     <bootmenu enable='no'/>
	I0924 18:41:24.288636   22837 main.go:141] libmachine: (ha-685475-m02)   </os>
	I0924 18:41:24.288648   22837 main.go:141] libmachine: (ha-685475-m02)   <devices>
	I0924 18:41:24.288661   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='cdrom'>
	I0924 18:41:24.288679   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/boot2docker.iso'/>
	I0924 18:41:24.288689   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hdc' bus='scsi'/>
	I0924 18:41:24.288695   22837 main.go:141] libmachine: (ha-685475-m02)       <readonly/>
	I0924 18:41:24.288703   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288712   22837 main.go:141] libmachine: (ha-685475-m02)     <disk type='file' device='disk'>
	I0924 18:41:24.288725   22837 main.go:141] libmachine: (ha-685475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:41:24.288738   22837 main.go:141] libmachine: (ha-685475-m02)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/ha-685475-m02.rawdisk'/>
	I0924 18:41:24.288748   22837 main.go:141] libmachine: (ha-685475-m02)       <target dev='hda' bus='virtio'/>
	I0924 18:41:24.288756   22837 main.go:141] libmachine: (ha-685475-m02)     </disk>
	I0924 18:41:24.288767   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288778   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='mk-ha-685475'/>
	I0924 18:41:24.288788   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288796   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288805   22837 main.go:141] libmachine: (ha-685475-m02)     <interface type='network'>
	I0924 18:41:24.288814   22837 main.go:141] libmachine: (ha-685475-m02)       <source network='default'/>
	I0924 18:41:24.288827   22837 main.go:141] libmachine: (ha-685475-m02)       <model type='virtio'/>
	I0924 18:41:24.288835   22837 main.go:141] libmachine: (ha-685475-m02)     </interface>
	I0924 18:41:24.288848   22837 main.go:141] libmachine: (ha-685475-m02)     <serial type='pty'>
	I0924 18:41:24.288862   22837 main.go:141] libmachine: (ha-685475-m02)       <target port='0'/>
	I0924 18:41:24.288876   22837 main.go:141] libmachine: (ha-685475-m02)     </serial>
	I0924 18:41:24.288885   22837 main.go:141] libmachine: (ha-685475-m02)     <console type='pty'>
	I0924 18:41:24.288892   22837 main.go:141] libmachine: (ha-685475-m02)       <target type='serial' port='0'/>
	I0924 18:41:24.288900   22837 main.go:141] libmachine: (ha-685475-m02)     </console>
	I0924 18:41:24.288911   22837 main.go:141] libmachine: (ha-685475-m02)     <rng model='virtio'>
	I0924 18:41:24.288922   22837 main.go:141] libmachine: (ha-685475-m02)       <backend model='random'>/dev/random</backend>
	I0924 18:41:24.288928   22837 main.go:141] libmachine: (ha-685475-m02)     </rng>
	I0924 18:41:24.288935   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288944   22837 main.go:141] libmachine: (ha-685475-m02)     
	I0924 18:41:24.288956   22837 main.go:141] libmachine: (ha-685475-m02)   </devices>
	I0924 18:41:24.288965   22837 main.go:141] libmachine: (ha-685475-m02) </domain>
	I0924 18:41:24.288975   22837 main.go:141] libmachine: (ha-685475-m02) 
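The block above is the complete domain XML the kvm2 driver hands to libvirt before booting the node VM. For anyone reproducing this step outside of minikube, a minimal sketch of the same define-and-start call via the Go libvirt bindings (libvirt.org/go/libvirt) could look like the following; the connection URI, the XML file name, and the error handling are assumptions for illustration, not the driver's actual code path:

```go
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the <domain type='kvm'> XML logged above.
	// "ha-685475-m02.xml" is a hypothetical file holding that XML.
	xml, err := os.ReadFile("ha-685475-m02.xml")
	if err != nil {
		log.Fatal(err)
	}

	// qemu:///system is an assumption; the kvm2 driver selects the URI itself.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it (the "Creating domain..." step in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
```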
	I0924 18:41:24.294992   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:bf:94:ad in network default
	I0924 18:41:24.295458   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring networks are active...
	I0924 18:41:24.295479   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:24.296154   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network default is active
	I0924 18:41:24.296453   22837 main.go:141] libmachine: (ha-685475-m02) Ensuring network mk-ha-685475 is active
	I0924 18:41:24.296812   22837 main.go:141] libmachine: (ha-685475-m02) Getting domain xml...
	I0924 18:41:24.297403   22837 main.go:141] libmachine: (ha-685475-m02) Creating domain...
	I0924 18:41:25.511930   22837 main.go:141] libmachine: (ha-685475-m02) Waiting to get IP...
	I0924 18:41:25.512699   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.513104   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.513143   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.513091   23203 retry.go:31] will retry after 234.16067ms: waiting for machine to come up
	I0924 18:41:25.748453   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:25.748989   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:25.749022   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:25.748910   23203 retry.go:31] will retry after 253.354873ms: waiting for machine to come up
	I0924 18:41:26.004434   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.004963   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.004991   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.004930   23203 retry.go:31] will retry after 301.553898ms: waiting for machine to come up
	I0924 18:41:26.308451   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.308934   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.308961   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.308888   23203 retry.go:31] will retry after 500.936612ms: waiting for machine to come up
	I0924 18:41:26.811529   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:26.812030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:26.812051   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:26.811979   23203 retry.go:31] will retry after 494.430185ms: waiting for machine to come up
	I0924 18:41:27.307617   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.308186   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.308222   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.308158   23203 retry.go:31] will retry after 624.183064ms: waiting for machine to come up
	I0924 18:41:27.933772   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:27.934215   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:27.934243   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:27.934171   23203 retry.go:31] will retry after 1.048717591s: waiting for machine to come up
	I0924 18:41:28.984256   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:28.984722   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:28.984750   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:28.984681   23203 retry.go:31] will retry after 1.344803754s: waiting for machine to come up
	I0924 18:41:30.331184   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:30.331665   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:30.331695   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:30.331611   23203 retry.go:31] will retry after 1.462041717s: waiting for machine to come up
	I0924 18:41:31.796038   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:31.796495   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:31.796521   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:31.796439   23203 retry.go:31] will retry after 1.946036169s: waiting for machine to come up
	I0924 18:41:33.743834   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:33.744264   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:33.744289   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:33.744229   23203 retry.go:31] will retry after 1.953552894s: waiting for machine to come up
	I0924 18:41:35.699784   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:35.700188   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:35.700207   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:35.700142   23203 retry.go:31] will retry after 3.550334074s: waiting for machine to come up
	I0924 18:41:39.251459   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:39.251859   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:39.251883   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:39.251819   23203 retry.go:31] will retry after 3.096214207s: waiting for machine to come up
	I0924 18:41:42.351720   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:42.352147   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find current IP address of domain ha-685475-m02 in network mk-ha-685475
	I0924 18:41:42.352168   22837 main.go:141] libmachine: (ha-685475-m02) DBG | I0924 18:41:42.352109   23203 retry.go:31] will retry after 5.133975311s: waiting for machine to come up
	I0924 18:41:47.489864   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490368   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has current primary IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.490384   22837 main.go:141] libmachine: (ha-685475-m02) Found IP for machine: 192.168.39.17
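The run of "will retry after ..." lines above is a poll-with-growing-delay loop: the driver keeps asking libvirt's DHCP leases for the new MAC until an address appears, stretching the wait from roughly 200ms up to a few seconds. A rough, self-contained sketch of that pattern follows; lookupIP, the delay schedule, and the timeout are placeholders, not minikube's retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "ask libvirt for the DHCP lease of this MAC".
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address") // pretend no lease yet
}

// waitForIP polls until an IP shows up, growing the delay each attempt,
// mirroring the 234ms / 253ms / ... / 5.1s progression seen in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		// Add a little jitter and grow the delay, capped so polling stays regular.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
}

func main() {
	// Short timeout so the sketch finishes quickly; the real driver waits much longer.
	if ip, err := waitForIP("52:54:00:c4:34:39", 3*time.Second); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}
```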
	I0924 18:41:47.490392   22837 main.go:141] libmachine: (ha-685475-m02) Reserving static IP address...
	I0924 18:41:47.490898   22837 main.go:141] libmachine: (ha-685475-m02) DBG | unable to find host DHCP lease matching {name: "ha-685475-m02", mac: "52:54:00:c4:34:39", ip: "192.168.39.17"} in network mk-ha-685475
	I0924 18:41:47.562679   22837 main.go:141] libmachine: (ha-685475-m02) Reserved static IP address: 192.168.39.17
	I0924 18:41:47.562701   22837 main.go:141] libmachine: (ha-685475-m02) Waiting for SSH to be available...
	I0924 18:41:47.562710   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Getting to WaitForSSH function...
	I0924 18:41:47.565356   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565738   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.565768   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.565964   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH client type: external
	I0924 18:41:47.565988   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa (-rw-------)
	I0924 18:41:47.566029   22837 main.go:141] libmachine: (ha-685475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:41:47.566047   22837 main.go:141] libmachine: (ha-685475-m02) DBG | About to run SSH command:
	I0924 18:41:47.566064   22837 main.go:141] libmachine: (ha-685475-m02) DBG | exit 0
	I0924 18:41:47.686618   22837 main.go:141] libmachine: (ha-685475-m02) DBG | SSH cmd err, output: <nil>: 
	I0924 18:41:47.686909   22837 main.go:141] libmachine: (ha-685475-m02) KVM machine creation complete!
	I0924 18:41:47.687246   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:47.687732   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.687897   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:47.688053   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:41:47.688065   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetState
	I0924 18:41:47.689263   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:41:47.689278   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:41:47.689283   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:41:47.689288   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.691350   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691620   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.691646   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.691809   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.691967   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692084   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.692218   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.692337   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.692527   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.692540   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:41:47.794027   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:47.794050   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:41:47.794060   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.796879   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797224   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.797254   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.797407   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.797704   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.797913   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.798111   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.798287   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.798451   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.798462   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:41:47.903254   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:41:47.903300   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:41:47.903305   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:41:47.903313   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903564   22837 buildroot.go:166] provisioning hostname "ha-685475-m02"
	I0924 18:41:47.903593   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:47.903777   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:47.906337   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906672   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:47.906694   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:47.906854   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:47.907009   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907154   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:47.907284   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:47.907446   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:47.907641   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:47.907655   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m02 && echo "ha-685475-m02" | sudo tee /etc/hostname
	I0924 18:41:48.025784   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m02
	
	I0924 18:41:48.025820   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.028558   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.028880   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.028907   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.029107   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.029274   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029415   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.029559   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.029722   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.029915   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.029932   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:41:48.139194   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:41:48.139227   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:41:48.139248   22837 buildroot.go:174] setting up certificates
	I0924 18:41:48.139267   22837 provision.go:84] configureAuth start
	I0924 18:41:48.139280   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetMachineName
	I0924 18:41:48.139566   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.142585   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143024   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.143053   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.143201   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.145124   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145481   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.145505   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.145654   22837 provision.go:143] copyHostCerts
	I0924 18:41:48.145692   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145726   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:41:48.145735   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:41:48.145801   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:41:48.145869   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145886   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:41:48.145891   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:41:48.145915   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:41:48.145955   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145971   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:41:48.145977   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:41:48.145998   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:41:48.146040   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m02 san=[127.0.0.1 192.168.39.17 ha-685475-m02 localhost minikube]
	I0924 18:41:48.245573   22837 provision.go:177] copyRemoteCerts
	I0924 18:41:48.245622   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:41:48.245643   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.248802   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249274   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.249306   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.249504   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.249706   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.249847   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.249994   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.328761   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:41:48.328834   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:41:48.362627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:41:48.362710   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:41:48.384868   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:41:48.384964   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:41:48.408148   22837 provision.go:87] duration metric: took 268.869175ms to configureAuth
	I0924 18:41:48.408177   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:41:48.408340   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:48.408409   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.410657   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411048   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.411073   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.411241   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.411430   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411632   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.411784   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.411937   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.412089   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.412102   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:41:48.621639   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:41:48.621659   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:41:48.621667   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetURL
	I0924 18:41:48.622862   22837 main.go:141] libmachine: (ha-685475-m02) DBG | Using libvirt version 6000000
	I0924 18:41:48.624753   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625070   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.625087   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.625272   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:41:48.625285   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:41:48.625291   22837 client.go:171] duration metric: took 24.799650651s to LocalClient.Create
	I0924 18:41:48.625312   22837 start.go:167] duration metric: took 24.799696127s to libmachine.API.Create "ha-685475"
	I0924 18:41:48.625325   22837 start.go:293] postStartSetup for "ha-685475-m02" (driver="kvm2")
	I0924 18:41:48.625340   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:41:48.625360   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.625542   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:41:48.625572   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.627676   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628030   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.628052   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.628180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.628342   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.628517   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.628659   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.708913   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:41:48.712956   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:41:48.712978   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:41:48.713046   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:41:48.713130   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:41:48.713141   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:41:48.713240   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:41:48.722192   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:48.744383   22837 start.go:296] duration metric: took 119.042113ms for postStartSetup
	I0924 18:41:48.744432   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetConfigRaw
	I0924 18:41:48.745000   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.747573   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.747893   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.747910   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.748162   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:41:48.748334   22837 start.go:128] duration metric: took 24.940306164s to createHost
	I0924 18:41:48.748356   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.750542   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.750887   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.750911   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.751015   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.751176   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751307   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.751425   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.751593   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:41:48.751774   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0924 18:41:48.751787   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:41:48.851074   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203308.831222046
	
	I0924 18:41:48.851092   22837 fix.go:216] guest clock: 1727203308.831222046
	I0924 18:41:48.851099   22837 fix.go:229] Guest: 2024-09-24 18:41:48.831222046 +0000 UTC Remote: 2024-09-24 18:41:48.748344809 +0000 UTC m=+73.162730067 (delta=82.877237ms)
	I0924 18:41:48.851113   22837 fix.go:200] guest clock delta is within tolerance: 82.877237ms
	I0924 18:41:48.851118   22837 start.go:83] releasing machines lock for "ha-685475-m02", held for 25.043203349s
	I0924 18:41:48.851134   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.851348   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:48.853818   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.854112   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.854136   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.856508   22837 out.go:177] * Found network options:
	I0924 18:41:48.857890   22837 out.go:177]   - NO_PROXY=192.168.39.7
	W0924 18:41:48.859133   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.859180   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859668   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859884   22837 main.go:141] libmachine: (ha-685475-m02) Calling .DriverName
	I0924 18:41:48.859962   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:41:48.860002   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	W0924 18:41:48.860062   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:41:48.860122   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:41:48.860142   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHHostname
	I0924 18:41:48.862654   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.862677   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863021   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863046   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863071   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:48.863085   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:48.863235   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863400   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863436   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHPort
	I0924 18:41:48.863592   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863623   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHKeyPath
	I0924 18:41:48.863730   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetSSHUsername
	I0924 18:41:48.863735   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:48.863845   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m02/id_rsa Username:docker}
	I0924 18:41:49.100910   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:41:49.106567   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:41:49.106646   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:41:49.123612   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:41:49.123643   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:41:49.123708   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:41:49.142937   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:41:49.156490   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:41:49.156545   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:41:49.169527   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:41:49.182177   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:41:49.291858   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:41:49.459326   22837 docker.go:233] disabling docker service ...
	I0924 18:41:49.459396   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:41:49.472974   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:41:49.485001   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:41:49.613925   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:41:49.729893   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:41:49.742924   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:41:49.760372   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:41:49.760435   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.771854   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:41:49.771935   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.783072   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.792955   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.802788   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:41:49.813021   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.822734   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.838535   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:41:49.848192   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:41:49.856844   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:41:49.856899   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:41:49.869401   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:41:49.878419   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:50.004449   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:41:50.089923   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:41:50.090004   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:41:50.094371   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:41:50.094436   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:41:50.097914   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:41:50.136366   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:41:50.136456   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:50.162234   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:41:50.190445   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:41:50.191917   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:41:50.193261   22837 main.go:141] libmachine: (ha-685475-m02) Calling .GetIP
	I0924 18:41:50.195868   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196181   22837 main.go:141] libmachine: (ha-685475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:34:39", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:41:37 +0000 UTC Type:0 Mac:52:54:00:c4:34:39 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-685475-m02 Clientid:01:52:54:00:c4:34:39}
	I0924 18:41:50.196210   22837 main.go:141] libmachine: (ha-685475-m02) DBG | domain ha-685475-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:c4:34:39 in network mk-ha-685475
	I0924 18:41:50.196416   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:41:50.200556   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:50.212678   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:41:50.212868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:41:50.213191   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.213221   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.227693   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0924 18:41:50.228149   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.228595   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.228613   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.228905   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.229090   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:41:50.230680   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:50.230980   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:50.231004   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:50.244907   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0924 18:41:50.245219   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:50.245604   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:50.245626   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:50.245901   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:50.246055   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:50.246187   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.17
	I0924 18:41:50.246201   22837 certs.go:194] generating shared ca certs ...
	I0924 18:41:50.246216   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.246327   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:41:50.246369   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:41:50.246378   22837 certs.go:256] generating profile certs ...
	I0924 18:41:50.246440   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:41:50.246464   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698
	I0924 18:41:50.246474   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.254]
	I0924 18:41:50.598027   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 ...
	I0924 18:41:50.598058   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698: {Name:mkf8f0e99ce8df80e2d67426d0c1db2d0002fe45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598227   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 ...
	I0924 18:41:50.598240   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698: {Name:mk2fd7db9063cce26eb5db83e155e40a1d36f1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:41:50.598308   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:41:50.598434   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.8bbab698 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:41:50.598561   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:41:50.598577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:41:50.598590   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:41:50.598601   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:41:50.598615   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:41:50.598627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:41:50.598639   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:41:50.598651   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:41:50.598663   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:41:50.598707   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:41:50.598733   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:41:50.598743   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:41:50.598763   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:41:50.598790   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:41:50.598808   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:41:50.598860   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:41:50.598885   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:41:50.598899   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:41:50.598912   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:50.598943   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:50.601751   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602261   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:50.602302   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:50.602435   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:50.602632   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:50.602771   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:50.602890   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:50.675173   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:41:50.679977   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:41:50.690734   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:41:50.694531   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:41:50.704513   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:41:50.708108   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:41:50.717272   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:41:50.721123   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:41:50.730473   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:41:50.733963   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:41:50.742805   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:41:50.746245   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:41:50.755896   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:41:50.779844   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:41:50.802343   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:41:50.824768   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:41:50.846513   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 18:41:50.868210   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:41:50.890482   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:41:50.912726   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:41:50.933992   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:41:50.954961   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:41:50.976681   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:41:50.999088   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:41:51.016166   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:41:51.032873   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:41:51.047752   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:41:51.062770   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:41:51.078108   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:41:51.093675   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 18:41:51.109375   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:41:51.115481   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:41:51.125989   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130012   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.130079   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:41:51.135264   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:41:51.144716   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:41:51.154096   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158032   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.158077   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:41:51.163212   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:41:51.172662   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:41:51.182229   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186313   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.186363   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:41:51.191704   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:41:51.202091   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:41:51.205856   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:41:51.205922   22837 kubeadm.go:934] updating node {m02 192.168.39.17 8443 v1.31.1 crio true true} ...
	I0924 18:41:51.206011   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:41:51.206039   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:41:51.206072   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:41:51.221517   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:41:51.221584   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0924 18:41:51.221651   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.229924   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:41:51.229982   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:41:51.238555   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:41:51.238577   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238641   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:41:51.238665   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 18:41:51.238675   22837 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 18:41:51.242749   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:41:51.242771   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:41:51.999295   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:51.999376   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:41:52.004346   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:41:52.004382   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:41:52.162918   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:41:52.197388   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.197497   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:41:52.207217   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:41:52.207268   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 18:41:52.538567   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:41:52.547052   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:41:52.561548   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:41:52.576215   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:41:52.591227   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:41:52.594529   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:41:52.604896   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:41:52.719375   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:41:52.736097   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:41:52.736483   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:41:52.736538   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:41:52.752065   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0924 18:41:52.752444   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:41:52.752959   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:41:52.752982   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:41:52.753304   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:41:52.753474   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:41:52.753613   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:41:52.753696   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:41:52.753710   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:41:52.756694   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757114   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:41:52.757131   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:41:52.757308   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:41:52.757468   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:41:52.757629   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:41:52.757745   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:41:52.888925   22837 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:41:52.888975   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I0924 18:42:11.743600   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7fwv7s.uj3o27m19d4lbaxl --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m02 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (18.8545724s)
	I0924 18:42:11.743651   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:42:12.256325   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m02 minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:42:12.517923   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:42:12.615905   22837 start.go:319] duration metric: took 19.86228628s to joinCluster
	I0924 18:42:12.616009   22837 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:42:12.616334   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:12.617637   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:42:12.618871   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:42:12.853779   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:42:12.878467   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:42:12.878815   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:42:12.878931   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:42:12.879186   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m02" to be "Ready" ...
	I0924 18:42:12.879290   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:12.879301   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:12.879309   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:12.879314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:12.895218   22837 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 18:42:13.380409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.380434   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.380445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.380450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.385029   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:13.879387   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:13.879410   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:13.879422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:13.879428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:13.883592   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:14.380062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.380082   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.380090   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.380095   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.397523   22837 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0924 18:42:14.879492   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:14.879513   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:14.879520   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:14.879526   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:14.882118   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:14.882608   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:15.380119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.380151   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.380164   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.380170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.383053   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:15.879674   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:15.879694   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:15.879702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:15.879708   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:15.882714   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.379456   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.379481   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.379490   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.379493   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.383195   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:16.880066   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:16.880089   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:16.880098   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:16.880105   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:16.882954   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:16.883690   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:17.380052   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.380084   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.380093   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.380096   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.384312   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:17.879766   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:17.879786   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:17.879794   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:17.879799   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:17.882650   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:18.379440   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.379460   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.379468   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.379474   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.382655   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.879894   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:18.879916   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:18.879925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:18.879931   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:18.883892   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:18.884363   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:19.379514   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.379537   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.379549   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.379555   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.383053   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:19.880045   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:19.880066   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:19.880075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:19.880080   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:19.883375   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:20.380221   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.380247   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.380256   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.380261   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.383167   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:20.879751   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:20.879771   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:20.879780   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:20.879784   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:20.883632   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.379420   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.379440   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.379449   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.379454   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.382852   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:21.383642   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:21.880087   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:21.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:21.880142   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:21.880147   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:21.883894   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.379995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.380016   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.380024   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.380028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.383198   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:22.879355   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:22.879379   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:22.879389   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:22.879394   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:22.882598   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.380170   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.380191   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.380198   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.380201   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.383280   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:23.383852   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:23.879484   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:23.879505   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:23.879514   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:23.879518   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:23.882485   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:24.380050   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.380072   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.380080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.380084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.383563   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:24.880157   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:24.880189   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:24.880201   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:24.880208   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:24.883633   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.379493   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.379514   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.379522   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.379527   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.382668   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:25.880369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:25.880389   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:25.880398   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:25.880401   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:25.884483   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:25.884968   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:26.380398   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.380418   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.380426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.380431   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.384043   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:26.880095   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:26.880120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:26.880131   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:26.880136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:26.884191   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:27.380154   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.380180   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.380192   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.380199   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.383272   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:27.879506   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:27.879528   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:27.879539   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:27.879556   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:27.882360   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:28.380188   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.380208   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.380217   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.380222   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.383324   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:28.384179   22837 node_ready.go:53] node "ha-685475-m02" has status "Ready":"False"
	I0924 18:42:28.880029   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:28.880052   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:28.880064   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:28.880072   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:28.883130   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.380071   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.380098   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.380110   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.380117   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.383220   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:29.880044   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:29.880064   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:29.880072   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:29.880077   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:29.883469   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.379846   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.379865   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.379873   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.379877   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.382760   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.880337   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.880358   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.880367   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.880371   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.883587   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:30.884005   22837 node_ready.go:49] node "ha-685475-m02" has status "Ready":"True"
	I0924 18:42:30.884024   22837 node_ready.go:38] duration metric: took 18.004817095s for node "ha-685475-m02" to be "Ready" ...
	I0924 18:42:30.884035   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:42:30.884109   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:30.884120   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.884130   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.884136   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.889226   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:42:30.898516   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.898598   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:42:30.898608   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.898616   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.898621   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.901236   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.901749   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.901762   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.901769   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.901773   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.903992   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.904550   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.904563   22837 pod_ready.go:82] duration metric: took 6.024673ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904570   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.904619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:42:30.904627   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.904634   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.904639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.907019   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.907540   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.907554   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.907560   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.907564   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.909829   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.910347   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.910361   22837 pod_ready.go:82] duration metric: took 5.783749ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910369   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.910412   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:42:30.910421   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.910427   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.910431   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.912745   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.913606   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:30.913622   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.913632   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.913639   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.916274   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.916867   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.916881   22837 pod_ready.go:82] duration metric: took 6.50607ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916889   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.916939   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:42:30.916948   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.916955   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.916960   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.919434   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:30.919982   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:30.919996   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:30.920003   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:30.920007   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:30.921770   22837 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0924 18:42:30.922347   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:30.922367   22837 pod_ready.go:82] duration metric: took 5.471344ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:30.922386   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.080824   22837 request.go:632] Waited for 158.3458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080885   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:42:31.080893   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.080904   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.080910   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.084145   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.281150   22837 request.go:632] Waited for 196.368053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281219   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:31.281226   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.281237   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.281243   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.284822   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.285606   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.285626   22837 pod_ready.go:82] duration metric: took 363.227315ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.285638   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.480778   22837 request.go:632] Waited for 195.072153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480848   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:42:31.480855   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.480868   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.480875   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.484120   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:31.681047   22837 request.go:632] Waited for 196.341286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681125   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:31.681133   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.681148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.681151   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.684093   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:31.684648   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:31.684666   22837 pod_ready.go:82] duration metric: took 399.019878ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.684678   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:31.880772   22837 request.go:632] Waited for 196.018851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880838   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:42:31.880846   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:31.880865   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:31.880873   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:31.884578   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.080481   22837 request.go:632] Waited for 195.272795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080548   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.080556   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.080567   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.080574   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.083669   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.084153   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.084170   22837 pod_ready.go:82] duration metric: took 399.485153ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.084179   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.281286   22837 request.go:632] Waited for 197.043639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281361   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:42:32.281367   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.281374   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.281379   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.284317   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:32.481341   22837 request.go:632] Waited for 196.394211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481408   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:32.481414   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.481423   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.481426   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.484712   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.485108   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.485126   22837 pod_ready.go:82] duration metric: took 400.941479ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.485135   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.681315   22837 request.go:632] Waited for 196.100251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681368   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:42:32.681374   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.681382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.681387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.684555   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.880797   22837 request.go:632] Waited for 195.427595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:32.880875   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:32.880886   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:32.880916   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:32.884757   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:32.885225   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:32.885244   22837 pod_ready.go:82] duration metric: took 400.103235ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:32.885253   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.080631   22837 request.go:632] Waited for 195.310618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:42:33.080703   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.080712   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.080718   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.084028   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.281072   22837 request.go:632] Waited for 196.37227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281123   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:33.281128   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.281136   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.281140   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.284485   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.285140   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.285160   22837 pod_ready.go:82] duration metric: took 399.900589ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.285169   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.481228   22837 request.go:632] Waited for 196.007394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481285   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:42:33.481290   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.481297   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.481301   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.484526   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:33.680916   22837 request.go:632] Waited for 195.378531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:42:33.681014   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.681027   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.681033   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.683790   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:33.684472   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:33.684489   22837 pod_ready.go:82] duration metric: took 399.314616ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.684498   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:33.880975   22837 request.go:632] Waited for 196.408433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881026   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:42:33.881031   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:33.881038   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:33.881043   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:33.884212   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.081232   22837 request.go:632] Waited for 196.342139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081301   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:42:34.081312   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.081340   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.081347   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.084215   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:42:34.084885   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:42:34.084905   22837 pod_ready.go:82] duration metric: took 400.399835ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:42:34.084918   22837 pod_ready.go:39] duration metric: took 3.200860786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
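The readiness polling above alternates a GET on each control-plane pod with a GET on its node, and only moves on once the pod's Ready condition reports True. A minimal way to check the same condition by hand, assuming the ha-685475 kubeconfig context that minikube writes on the host is available (the pod name is taken from the log above):

    kubectl --context ha-685475 -n kube-system get pod kube-scheduler-ha-685475-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'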
	I0924 18:42:34.084956   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:42:34.085018   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:42:34.099253   22837 api_server.go:72] duration metric: took 21.483198905s to wait for apiserver process to appear ...
	I0924 18:42:34.099269   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:42:34.099293   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:42:34.103172   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:42:34.103230   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:42:34.103238   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.103245   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.103249   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.104031   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:42:34.104219   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:42:34.104236   22837 api_server.go:131] duration metric: took 4.961214ms to wait for apiserver health ...
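The healthz probe is a plain HTTPS GET against the endpoint shown in the log; it can usually be repeated from the host as a quick sketch, assuming the apiserver still permits anonymous access to /healthz (the default RBAC binding does), otherwise the profile's client certificate has to be supplied:

    curl -k https://192.168.39.7:8443/healthz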
	I0924 18:42:34.104242   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:42:34.280630   22837 request.go:632] Waited for 176.320456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280681   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.280686   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.280694   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.280697   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.284696   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.289267   22837 system_pods.go:59] 17 kube-system pods found
	I0924 18:42:34.289298   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.289303   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.289307   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.289312   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.289315   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.289318   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.289322   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.289325   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.289329   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.289333   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.289335   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.289339   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.289341   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.289344   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.289351   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.289355   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.289357   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.289363   22837 system_pods.go:74] duration metric: took 185.114229ms to wait for pod list to return data ...
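The pod list it is waiting on is the ordinary kube-system listing; the same 17 pods can be inspected directly, again assuming the ha-685475 context:

    kubectl --context ha-685475 -n kube-system get pods -o wide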
	I0924 18:42:34.289371   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:42:34.480833   22837 request.go:632] Waited for 191.389799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480905   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:42:34.480912   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.480920   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.480925   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.484374   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.484575   22837 default_sa.go:45] found service account: "default"
	I0924 18:42:34.484590   22837 default_sa.go:55] duration metric: took 195.213451ms for default service account to be created ...
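The default service account check is equally simple to reproduce; a sketch, assuming the same context:

    kubectl --context ha-685475 -n default get serviceaccount default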
	I0924 18:42:34.484598   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:42:34.681020   22837 request.go:632] Waited for 196.354693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681092   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:42:34.681097   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.681105   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.681113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.685266   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:42:34.689541   22837 system_pods.go:86] 17 kube-system pods found
	I0924 18:42:34.689565   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:42:34.689571   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:42:34.689574   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:42:34.689578   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:42:34.689581   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:42:34.689585   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:42:34.689588   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:42:34.689593   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:42:34.689598   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:42:34.689603   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:42:34.689608   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:42:34.689616   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:42:34.689623   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:42:34.689633   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:42:34.689638   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:42:34.689642   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:42:34.689646   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:42:34.689652   22837 system_pods.go:126] duration metric: took 205.048658ms to wait for k8s-apps to be running ...
	I0924 18:42:34.689667   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:42:34.689711   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:42:34.702696   22837 system_svc.go:56] duration metric: took 13.022824ms WaitForService to wait for kubelet
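The kubelet check is a single systemd query run over SSH; systemctl is-active --quiet exits 0 only when the unit is active, so it works directly in a shell test, for example:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is running" || echo "kubelet is not running"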
	I0924 18:42:34.702718   22837 kubeadm.go:582] duration metric: took 22.086667119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:42:34.702741   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:42:34.881196   22837 request.go:632] Waited for 178.393564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881289   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:42:34.881300   22837 round_trippers.go:469] Request Headers:
	I0924 18:42:34.881308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:42:34.881314   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:42:34.885104   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:42:34.885818   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885841   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885858   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:42:34.885862   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:42:34.885866   22837 node_conditions.go:105] duration metric: took 183.120221ms to run NodePressure ...
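The NodePressure step reads capacity straight from the Node objects; the same figures logged above (2 CPUs and 17734596Ki of ephemeral storage per node) can be listed with a custom-columns query, assuming the ha-685475 context:

    kubectl --context ha-685475 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage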
	I0924 18:42:34.885879   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:42:34.885917   22837 start.go:255] writing updated cluster config ...
	I0924 18:42:34.888071   22837 out.go:201] 
	I0924 18:42:34.889729   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:42:34.889845   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.891554   22837 out.go:177] * Starting "ha-685475-m03" control-plane node in "ha-685475" cluster
	I0924 18:42:34.893081   22837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:42:34.893105   22837 cache.go:56] Caching tarball of preloaded images
	I0924 18:42:34.893223   22837 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:42:34.893237   22837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:42:34.893331   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:42:34.893543   22837 start.go:360] acquireMachinesLock for ha-685475-m03: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:42:34.893593   22837 start.go:364] duration metric: took 31.193µs to acquireMachinesLock for "ha-685475-m03"
	I0924 18:42:34.893622   22837 start.go:93] Provisioning new machine with config: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}

	I0924 18:42:34.893742   22837 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 18:42:34.895364   22837 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 18:42:34.895477   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:42:34.895520   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:42:34.910309   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0924 18:42:34.910707   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:42:34.911166   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:42:34.911189   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:42:34.911445   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:42:34.911666   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:34.911812   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:34.911970   22837 start.go:159] libmachine.API.Create for "ha-685475" (driver="kvm2")
	I0924 18:42:34.912006   22837 client.go:168] LocalClient.Create starting
	I0924 18:42:34.912049   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 18:42:34.912087   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912107   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912168   22837 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 18:42:34.912193   22837 main.go:141] libmachine: Decoding PEM data...
	I0924 18:42:34.912206   22837 main.go:141] libmachine: Parsing certificate...
	I0924 18:42:34.912226   22837 main.go:141] libmachine: Running pre-create checks...
	I0924 18:42:34.912234   22837 main.go:141] libmachine: (ha-685475-m03) Calling .PreCreateCheck
	I0924 18:42:34.912354   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:34.912664   22837 main.go:141] libmachine: Creating machine...
	I0924 18:42:34.912675   22837 main.go:141] libmachine: (ha-685475-m03) Calling .Create
	I0924 18:42:34.912804   22837 main.go:141] libmachine: (ha-685475-m03) Creating KVM machine...
	I0924 18:42:34.914072   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing default KVM network
	I0924 18:42:34.914216   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found existing private KVM network mk-ha-685475
	I0924 18:42:34.914343   22837 main.go:141] libmachine: (ha-685475-m03) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:34.914367   22837 main.go:141] libmachine: (ha-685475-m03) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:42:34.914418   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:34.914332   23604 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:34.914495   22837 main.go:141] libmachine: (ha-685475-m03) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 18:42:35.139279   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.139122   23604 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa...
	I0924 18:42:35.223317   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223211   23604 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk...
	I0924 18:42:35.223345   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing magic tar header
	I0924 18:42:35.223358   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Writing SSH key tar header
	I0924 18:42:35.223365   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:35.223334   23604 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 ...
	I0924 18:42:35.223430   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03
	I0924 18:42:35.223477   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03 (perms=drwx------)
	I0924 18:42:35.223494   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 18:42:35.223501   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 18:42:35.223508   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:42:35.223518   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 18:42:35.223529   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 18:42:35.223535   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 18:42:35.223544   22837 main.go:141] libmachine: (ha-685475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 18:42:35.223549   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:35.223557   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 18:42:35.223562   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 18:42:35.223568   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 18:42:35.223575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Checking permissions on dir: /home
	I0924 18:42:35.223580   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Skipping /home - not owner
	I0924 18:42:35.224656   22837 main.go:141] libmachine: (ha-685475-m03) define libvirt domain using xml: 
	I0924 18:42:35.224680   22837 main.go:141] libmachine: (ha-685475-m03) <domain type='kvm'>
	I0924 18:42:35.224689   22837 main.go:141] libmachine: (ha-685475-m03)   <name>ha-685475-m03</name>
	I0924 18:42:35.224694   22837 main.go:141] libmachine: (ha-685475-m03)   <memory unit='MiB'>2200</memory>
	I0924 18:42:35.224699   22837 main.go:141] libmachine: (ha-685475-m03)   <vcpu>2</vcpu>
	I0924 18:42:35.224704   22837 main.go:141] libmachine: (ha-685475-m03)   <features>
	I0924 18:42:35.224709   22837 main.go:141] libmachine: (ha-685475-m03)     <acpi/>
	I0924 18:42:35.224713   22837 main.go:141] libmachine: (ha-685475-m03)     <apic/>
	I0924 18:42:35.224718   22837 main.go:141] libmachine: (ha-685475-m03)     <pae/>
	I0924 18:42:35.224722   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.224730   22837 main.go:141] libmachine: (ha-685475-m03)   </features>
	I0924 18:42:35.224736   22837 main.go:141] libmachine: (ha-685475-m03)   <cpu mode='host-passthrough'>
	I0924 18:42:35.224742   22837 main.go:141] libmachine: (ha-685475-m03)   
	I0924 18:42:35.224746   22837 main.go:141] libmachine: (ha-685475-m03)   </cpu>
	I0924 18:42:35.224750   22837 main.go:141] libmachine: (ha-685475-m03)   <os>
	I0924 18:42:35.224756   22837 main.go:141] libmachine: (ha-685475-m03)     <type>hvm</type>
	I0924 18:42:35.224761   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='cdrom'/>
	I0924 18:42:35.224770   22837 main.go:141] libmachine: (ha-685475-m03)     <boot dev='hd'/>
	I0924 18:42:35.224784   22837 main.go:141] libmachine: (ha-685475-m03)     <bootmenu enable='no'/>
	I0924 18:42:35.224794   22837 main.go:141] libmachine: (ha-685475-m03)   </os>
	I0924 18:42:35.224799   22837 main.go:141] libmachine: (ha-685475-m03)   <devices>
	I0924 18:42:35.224808   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='cdrom'>
	I0924 18:42:35.224840   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/boot2docker.iso'/>
	I0924 18:42:35.224861   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hdc' bus='scsi'/>
	I0924 18:42:35.224871   22837 main.go:141] libmachine: (ha-685475-m03)       <readonly/>
	I0924 18:42:35.224885   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224898   22837 main.go:141] libmachine: (ha-685475-m03)     <disk type='file' device='disk'>
	I0924 18:42:35.224908   22837 main.go:141] libmachine: (ha-685475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 18:42:35.224920   22837 main.go:141] libmachine: (ha-685475-m03)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/ha-685475-m03.rawdisk'/>
	I0924 18:42:35.224939   22837 main.go:141] libmachine: (ha-685475-m03)       <target dev='hda' bus='virtio'/>
	I0924 18:42:35.224949   22837 main.go:141] libmachine: (ha-685475-m03)     </disk>
	I0924 18:42:35.224954   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225004   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='mk-ha-685475'/>
	I0924 18:42:35.225029   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225048   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225067   22837 main.go:141] libmachine: (ha-685475-m03)     <interface type='network'>
	I0924 18:42:35.225079   22837 main.go:141] libmachine: (ha-685475-m03)       <source network='default'/>
	I0924 18:42:35.225088   22837 main.go:141] libmachine: (ha-685475-m03)       <model type='virtio'/>
	I0924 18:42:35.225094   22837 main.go:141] libmachine: (ha-685475-m03)     </interface>
	I0924 18:42:35.225101   22837 main.go:141] libmachine: (ha-685475-m03)     <serial type='pty'>
	I0924 18:42:35.225106   22837 main.go:141] libmachine: (ha-685475-m03)       <target port='0'/>
	I0924 18:42:35.225112   22837 main.go:141] libmachine: (ha-685475-m03)     </serial>
	I0924 18:42:35.225118   22837 main.go:141] libmachine: (ha-685475-m03)     <console type='pty'>
	I0924 18:42:35.225124   22837 main.go:141] libmachine: (ha-685475-m03)       <target type='serial' port='0'/>
	I0924 18:42:35.225131   22837 main.go:141] libmachine: (ha-685475-m03)     </console>
	I0924 18:42:35.225144   22837 main.go:141] libmachine: (ha-685475-m03)     <rng model='virtio'>
	I0924 18:42:35.225156   22837 main.go:141] libmachine: (ha-685475-m03)       <backend model='random'>/dev/random</backend>
	I0924 18:42:35.225167   22837 main.go:141] libmachine: (ha-685475-m03)     </rng>
	I0924 18:42:35.225176   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225183   22837 main.go:141] libmachine: (ha-685475-m03)     
	I0924 18:42:35.225192   22837 main.go:141] libmachine: (ha-685475-m03)   </devices>
	I0924 18:42:35.225202   22837 main.go:141] libmachine: (ha-685475-m03) </domain>
	I0924 18:42:35.225210   22837 main.go:141] libmachine: (ha-685475-m03) 
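minikube's kvm2 driver submits this domain definition through the libvirt API, but the same XML can be managed with the virsh CLI; a rough equivalent, assuming the XML above were saved to a hypothetical file ha-685475-m03.xml and using the qemu:///system URI from the profile config:

    virsh -c qemu:///system define ha-685475-m03.xml
    virsh -c qemu:///system start ha-685475-m03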
	I0924 18:42:35.232041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:d0:37:5a in network default
	I0924 18:42:35.232661   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:35.232681   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring networks are active...
	I0924 18:42:35.233409   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network default is active
	I0924 18:42:35.233744   22837 main.go:141] libmachine: (ha-685475-m03) Ensuring network mk-ha-685475 is active
	I0924 18:42:35.234266   22837 main.go:141] libmachine: (ha-685475-m03) Getting domain xml...
	I0924 18:42:35.235093   22837 main.go:141] libmachine: (ha-685475-m03) Creating domain...
	I0924 18:42:36.442620   22837 main.go:141] libmachine: (ha-685475-m03) Waiting to get IP...
	I0924 18:42:36.443397   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.443765   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.443802   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.443732   23604 retry.go:31] will retry after 244.798943ms: waiting for machine to come up
	I0924 18:42:36.690206   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:36.690698   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:36.690720   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:36.690654   23604 retry.go:31] will retry after 308.672235ms: waiting for machine to come up
	I0924 18:42:37.000890   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.001339   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.001369   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.001302   23604 retry.go:31] will retry after 346.180057ms: waiting for machine to come up
	I0924 18:42:37.348700   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.349107   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.349134   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.349075   23604 retry.go:31] will retry after 530.317337ms: waiting for machine to come up
	I0924 18:42:37.881459   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:37.882098   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:37.882122   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:37.882050   23604 retry.go:31] will retry after 620.764429ms: waiting for machine to come up
	I0924 18:42:38.504892   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:38.505327   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:38.505356   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:38.505288   23604 retry.go:31] will retry after 656.642966ms: waiting for machine to come up
	I0924 18:42:39.163234   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.163670   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.163696   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.163622   23604 retry.go:31] will retry after 804.533823ms: waiting for machine to come up
	I0924 18:42:39.969249   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:39.969758   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:39.969781   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:39.969719   23604 retry.go:31] will retry after 1.112599979s: waiting for machine to come up
	I0924 18:42:41.083861   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:41.084304   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:41.084326   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:41.084250   23604 retry.go:31] will retry after 1.484881709s: waiting for machine to come up
	I0924 18:42:42.570773   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:42.571260   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:42.571291   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:42.571214   23604 retry.go:31] will retry after 1.470650116s: waiting for machine to come up
	I0924 18:42:44.043746   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:44.044161   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:44.044186   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:44.044127   23604 retry.go:31] will retry after 2.749899674s: waiting for machine to come up
	I0924 18:42:46.796154   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:46.796548   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:46.796586   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:46.796499   23604 retry.go:31] will retry after 2.668083753s: waiting for machine to come up
	I0924 18:42:49.467725   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:49.468171   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:49.468196   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:49.468125   23604 retry.go:31] will retry after 4.505913039s: waiting for machine to come up
	I0924 18:42:53.976055   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:53.976513   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find current IP address of domain ha-685475-m03 in network mk-ha-685475
	I0924 18:42:53.976533   22837 main.go:141] libmachine: (ha-685475-m03) DBG | I0924 18:42:53.976473   23604 retry.go:31] will retry after 5.05928848s: waiting for machine to come up
	I0924 18:42:59.039895   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.040268   22837 main.go:141] libmachine: (ha-685475-m03) Found IP for machine: 192.168.39.84
	I0924 18:42:59.040292   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.040302   22837 main.go:141] libmachine: (ha-685475-m03) Reserving static IP address...
	I0924 18:42:59.040633   22837 main.go:141] libmachine: (ha-685475-m03) DBG | unable to find host DHCP lease matching {name: "ha-685475-m03", mac: "52:54:00:47:f3:5c", ip: "192.168.39.84"} in network mk-ha-685475
	I0924 18:42:59.109971   22837 main.go:141] libmachine: (ha-685475-m03) Reserved static IP address: 192.168.39.84
	I0924 18:42:59.110001   22837 main.go:141] libmachine: (ha-685475-m03) Waiting for SSH to be available...
	I0924 18:42:59.110011   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Getting to WaitForSSH function...
	I0924 18:42:59.112837   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.113218   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.113243   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
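The IP discovery loop above keeps retrying until a DHCP lease matching the domain's MAC address appears on the private network; roughly the same lease information can be inspected from the host with:

    virsh -c qemu:///system net-dhcp-leases mk-ha-685475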
	I0924 18:42:59.113377   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH client type: external
	I0924 18:42:59.113400   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa (-rw-------)
	I0924 18:42:59.113429   22837 main.go:141] libmachine: (ha-685475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 18:42:59.113441   22837 main.go:141] libmachine: (ha-685475-m03) DBG | About to run SSH command:
	I0924 18:42:59.113458   22837 main.go:141] libmachine: (ha-685475-m03) DBG | exit 0
	I0924 18:42:59.234787   22837 main.go:141] libmachine: (ha-685475-m03) DBG | SSH cmd err, output: <nil>: 
	I0924 18:42:59.235096   22837 main.go:141] libmachine: (ha-685475-m03) KVM machine creation complete!
	I0924 18:42:59.235444   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:42:59.235990   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236156   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:42:59.236834   22837 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 18:42:59.236851   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetState
	I0924 18:42:59.238058   22837 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 18:42:59.238082   22837 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 18:42:59.238089   22837 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 18:42:59.238099   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.241168   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241742   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.241769   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.241929   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.242092   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242231   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.242340   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.242506   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.242695   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.242706   22837 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 18:42:59.337829   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:42:59.337850   22837 main.go:141] libmachine: Detecting the provisioner...
	I0924 18:42:59.337860   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.340431   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340774   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.340806   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.340930   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.341115   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341253   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.341386   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.341535   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.341719   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.341733   22837 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 18:42:59.439659   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 18:42:59.439743   22837 main.go:141] libmachine: found compatible host: buildroot
	I0924 18:42:59.439756   22837 main.go:141] libmachine: Provisioning with buildroot...
	I0924 18:42:59.439767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440013   22837 buildroot.go:166] provisioning hostname "ha-685475-m03"
	I0924 18:42:59.440035   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.440208   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.443110   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.443484   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.443628   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.443776   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.443925   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.444043   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.444195   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.444388   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.444405   22837 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475-m03 && echo "ha-685475-m03" | sudo tee /etc/hostname
	I0924 18:42:59.552104   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475-m03
	
	I0924 18:42:59.552146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.555198   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555610   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.555635   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.555825   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:42:59.555999   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556210   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:42:59.556377   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:42:59.556530   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:42:59.556692   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:42:59.556725   22837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:42:59.663026   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:42:59.663065   22837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:42:59.663091   22837 buildroot.go:174] setting up certificates
	I0924 18:42:59.663104   22837 provision.go:84] configureAuth start
	I0924 18:42:59.663128   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetMachineName
	I0924 18:42:59.663405   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:42:59.666046   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666433   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.666453   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.666616   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:42:59.668726   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669069   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:42:59.669093   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:42:59.669219   22837 provision.go:143] copyHostCerts
	I0924 18:42:59.669250   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669289   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:42:59.669299   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:42:59.669379   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:42:59.669484   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669511   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:42:59.669521   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:42:59.669559   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:42:59.669627   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669655   22837 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:42:59.669664   22837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:42:59.669698   22837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:42:59.669771   22837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475-m03 san=[127.0.0.1 192.168.39.84 ha-685475-m03 localhost minikube]
	I0924 18:43:00.034638   22837 provision.go:177] copyRemoteCerts
	I0924 18:43:00.034686   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:43:00.034707   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.037567   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.037972   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.037994   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.038177   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.038367   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.038523   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.038654   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.116658   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:43:00.116731   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:43:00.138751   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:43:00.138812   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:43:00.160322   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:43:00.160404   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 18:43:00.182956   22837 provision.go:87] duration metric: took 519.836065ms to configureAuth
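The three scp calls above place the host-generated CA and server certificate/key under /etc/docker on the new machine, with the SANs listed earlier (127.0.0.1, 192.168.39.84, ha-685475-m03, localhost, minikube) baked into server.pem. A minimal sanity check on the guest, assuming the openssl CLI is present in the Buildroot image (illustrative, not part of this run):

	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem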
	I0924 18:43:00.182981   22837 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:43:00.183174   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:00.183247   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.186012   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186463   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.186490   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.186708   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.186905   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187085   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.187211   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.187369   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.187586   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.187604   22837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:43:00.387241   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:43:00.387266   22837 main.go:141] libmachine: Checking connection to Docker...
	I0924 18:43:00.387274   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetURL
	I0924 18:43:00.388619   22837 main.go:141] libmachine: (ha-685475-m03) DBG | Using libvirt version 6000000
	I0924 18:43:00.390883   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391239   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.391267   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.391387   22837 main.go:141] libmachine: Docker is up and running!
	I0924 18:43:00.391407   22837 main.go:141] libmachine: Reticulating splines...
	I0924 18:43:00.391414   22837 client.go:171] duration metric: took 25.479397424s to LocalClient.Create
	I0924 18:43:00.391440   22837 start.go:167] duration metric: took 25.479470372s to libmachine.API.Create "ha-685475"
	I0924 18:43:00.391451   22837 start.go:293] postStartSetup for "ha-685475-m03" (driver="kvm2")
	I0924 18:43:00.391474   22837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:43:00.391492   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.391777   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:43:00.391810   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.393710   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394015   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.394041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.394165   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.394339   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.394452   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.394556   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.473009   22837 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:43:00.477004   22837 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:43:00.477028   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:43:00.477094   22837 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:43:00.477170   22837 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:43:00.477183   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:43:00.477284   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:43:00.486009   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:00.508200   22837 start.go:296] duration metric: took 116.732729ms for postStartSetup
	I0924 18:43:00.508250   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetConfigRaw
	I0924 18:43:00.508816   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.511555   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.511901   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.511930   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.512205   22837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:43:00.512420   22837 start.go:128] duration metric: took 25.618667241s to createHost
	I0924 18:43:00.512456   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.514675   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515041   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.515063   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.515191   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.515334   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515443   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.515542   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.515680   22837 main.go:141] libmachine: Using SSH client type: native
	I0924 18:43:00.515847   22837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0924 18:43:00.515859   22837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:43:00.611172   22837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203380.591704428
	
	I0924 18:43:00.611192   22837 fix.go:216] guest clock: 1727203380.591704428
	I0924 18:43:00.611199   22837 fix.go:229] Guest: 2024-09-24 18:43:00.591704428 +0000 UTC Remote: 2024-09-24 18:43:00.512437538 +0000 UTC m=+144.926822798 (delta=79.26689ms)
	I0924 18:43:00.611227   22837 fix.go:200] guest clock delta is within tolerance: 79.26689ms
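The delta above comes from comparing the guest's date +%s.%N output with the host-side timestamp recorded when the SSH command returned. A hand-rolled version of the same comparison, reusing the SSH key and guest IP from this run (illustrative only):

	GUEST=$(ssh -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa docker@192.168.39.84 'date +%s.%N')
	HOST=$(date +%s.%N)
	echo "guest - host clock delta: $(echo "$GUEST - $HOST" | bc) s"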
	I0924 18:43:00.611257   22837 start.go:83] releasing machines lock for "ha-685475-m03", held for 25.717628791s
	I0924 18:43:00.611280   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.611536   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:00.614210   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.614585   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.614613   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.617023   22837 out.go:177] * Found network options:
	I0924 18:43:00.618386   22837 out.go:177]   - NO_PROXY=192.168.39.7,192.168.39.17
	W0924 18:43:00.619538   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.619561   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.619572   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620146   22837 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:43:00.620209   22837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:43:00.620244   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	W0924 18:43:00.620303   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 18:43:00.620325   22837 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 18:43:00.620388   22837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:43:00.620402   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:43:00.622880   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623148   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623312   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623338   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623544   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:00.623554   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623575   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:00.623757   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:43:00.623767   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623887   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:43:00.623954   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624007   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:43:00.624095   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.624139   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:43:00.854971   22837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:43:00.860491   22837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:43:00.860570   22837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:43:00.875041   22837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 18:43:00.875064   22837 start.go:495] detecting cgroup driver to use...
	I0924 18:43:00.875138   22837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:43:00.890952   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:43:00.903982   22837 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:43:00.904031   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:43:00.917362   22837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:43:00.932669   22837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:43:01.042282   22837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:43:01.188592   22837 docker.go:233] disabling docker service ...
	I0924 18:43:01.188652   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:43:01.202602   22837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:43:01.214596   22837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:43:01.362941   22837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:43:01.483096   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:43:01.496147   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:43:01.513707   22837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:43:01.513773   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.523612   22837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:43:01.523679   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.534669   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.544789   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.554357   22837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:43:01.564046   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.573589   22837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:43:01.589268   22837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
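Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not read back from the guest):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]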
	I0924 18:43:01.599288   22837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:43:01.609178   22837 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 18:43:01.609244   22837 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 18:43:01.620961   22837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
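The modprobe and echo above load br_netfilter (the earlier sysctl probe failed only because the module was not yet loaded) and turn on IPv4 forwarding. Both can be spot-checked on the guest with (illustrative):

	lsmod | grep br_netfilter
	cat /proc/sys/net/ipv4/ip_forward   # expected to print 1 after the echo above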
	I0924 18:43:01.629927   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:01.745962   22837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:43:01.839298   22837 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:43:01.839385   22837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:43:01.843960   22837 start.go:563] Will wait 60s for crictl version
	I0924 18:43:01.844013   22837 ssh_runner.go:195] Run: which crictl
	I0924 18:43:01.847394   22837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:43:01.883086   22837 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:43:01.883173   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:43:01.910912   22837 ssh_runner.go:195] Run: crio --version
	I0924 18:43:01.939648   22837 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:43:01.941115   22837 out.go:177]   - env NO_PROXY=192.168.39.7
	I0924 18:43:01.942322   22837 out.go:177]   - env NO_PROXY=192.168.39.7,192.168.39.17
	I0924 18:43:01.943445   22837 main.go:141] libmachine: (ha-685475-m03) Calling .GetIP
	I0924 18:43:01.945818   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946123   22837 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:43:01.946145   22837 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:43:01.946354   22837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:43:01.950271   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:01.961605   22837 mustload.go:65] Loading cluster: ha-685475
	I0924 18:43:01.961842   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:01.962136   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.962173   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.976744   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0924 18:43:01.977209   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.977706   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.977723   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.978053   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.978214   22837 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:43:01.979876   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:01.980161   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:01.980194   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:01.994159   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0924 18:43:01.994450   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:01.994902   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:01.994924   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:01.995194   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:01.995386   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:01.995533   22837 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.84
	I0924 18:43:01.995545   22837 certs.go:194] generating shared ca certs ...
	I0924 18:43:01.995558   22837 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:01.995697   22837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:43:01.995733   22837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:43:01.995744   22837 certs.go:256] generating profile certs ...
	I0924 18:43:01.995811   22837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:43:01.995834   22837 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721
	I0924 18:43:01.995847   22837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.84 192.168.39.254]
	I0924 18:43:02.322791   22837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 ...
	I0924 18:43:02.322837   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721: {Name:mkebefefa2737490c508c384151059616130ea10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323013   22837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 ...
	I0924 18:43:02.323026   22837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721: {Name:mk784db272b18b5ad01513b873f3e2d227a52a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:43:02.323095   22837 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:43:02.323227   22837 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.f075a721 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:43:02.323344   22837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:43:02.323364   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:43:02.323377   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:43:02.323390   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:43:02.323403   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:43:02.323415   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:43:02.323427   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:43:02.323438   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:43:02.338931   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:43:02.339017   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:43:02.339066   22837 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:43:02.339077   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:43:02.339099   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:43:02.339124   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:43:02.339155   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:43:02.339192   22837 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:43:02.339227   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.339248   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.339262   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.339300   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:02.342163   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342483   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:02.342502   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:02.342764   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:02.342966   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:02.343115   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:02.343267   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:02.415201   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 18:43:02.420165   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 18:43:02.429856   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 18:43:02.433796   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0924 18:43:02.444492   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 18:43:02.448439   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 18:43:02.457436   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 18:43:02.461533   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 18:43:02.470598   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 18:43:02.474412   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 18:43:02.483836   22837 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 18:43:02.487823   22837 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 18:43:02.497111   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:43:02.521054   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:43:02.543456   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:43:02.568215   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:43:02.592612   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 18:43:02.615696   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:43:02.644606   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:43:02.666219   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:43:02.687592   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:43:02.709023   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:43:02.730055   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:43:02.751785   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 18:43:02.766876   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0924 18:43:02.781877   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 18:43:02.801467   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 18:43:02.818674   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 18:43:02.833922   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 18:43:02.850197   22837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 18:43:02.867351   22837 ssh_runner.go:195] Run: openssl version
	I0924 18:43:02.872885   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:43:02.883212   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887607   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.887666   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:43:02.893210   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:43:02.903216   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:43:02.913130   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917524   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.917603   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:43:02.922951   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:43:02.932615   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:43:02.942684   22837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946739   22837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.946793   22837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:43:02.952018   22837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
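Each ln -fs above follows the usual OpenSSL CA-directory convention: the link name is the certificate's subject hash, as printed by openssl x509 -hash, plus a .0 suffix. For the minikube CA in this run that looks like (illustrative):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"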
	I0924 18:43:02.962341   22837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:43:02.965981   22837 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:43:02.966043   22837 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I0924 18:43:02.966160   22837 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:43:02.966192   22837 kube-vip.go:115] generating kube-vip config ...
	I0924 18:43:02.966222   22837 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:43:02.981139   22837 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:43:02.981202   22837 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0924 18:43:02.981266   22837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.990568   22837 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:43:02.990634   22837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:43:02.999175   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:43:02.999208   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999266   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:43:02.999178   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 18:43:02.999349   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:02.999180   22837 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 18:43:02.999391   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:02.999394   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:43:03.003117   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 18:43:03.003143   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:43:03.036084   22837 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.036114   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 18:43:03.036142   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:43:03.036201   22837 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:43:03.075645   22837 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 18:43:03.075686   22837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 18:43:03.823364   22837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 18:43:03.832908   22837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 18:43:03.848931   22837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:43:03.864946   22837 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
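At this point the kubelet drop-in and the kube-vip static-pod manifest generated above are on disk on the new node. A quick confirmation on the guest (illustrative, not part of this run):

	systemctl cat kubelet            # should show the 10-kubeadm.conf drop-in written above
	ls /etc/kubernetes/manifests/    # should now include kube-vip.yaml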
	I0924 18:43:03.881201   22837 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:43:03.885272   22837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 18:43:03.896591   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:04.021336   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:04.039285   22837 host.go:66] Checking if "ha-685475" exists ...
	I0924 18:43:04.039604   22837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:43:04.039646   22837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:43:04.055236   22837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0924 18:43:04.055694   22837 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:43:04.056178   22837 main.go:141] libmachine: Using API Version  1
	I0924 18:43:04.056193   22837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:43:04.056537   22837 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:43:04.056733   22837 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:43:04.056878   22837 start.go:317] joinCluster: &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:43:04.057018   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 18:43:04.057041   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:43:04.059760   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060326   22837 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:43:04.060356   22837 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:43:04.060505   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:43:04.060673   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:43:04.060817   22837 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:43:04.060972   22837 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:43:04.197827   22837 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:04.197878   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0924 18:43:25.103587   22837 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f605s0.ormwy1royddhsvvy --discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-685475-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (20.905680905s)
	I0924 18:43:25.103634   22837 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 18:43:25.704348   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-685475-m03 minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=ha-685475 minikube.k8s.io/primary=false
	I0924 18:43:25.818601   22837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-685475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 18:43:25.943482   22837 start.go:319] duration metric: took 21.886600064s to joinCluster
	I0924 18:43:25.943562   22837 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 18:43:25.943868   22837 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:43:25.945143   22837 out.go:177] * Verifying Kubernetes components...
	I0924 18:43:25.946900   22837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:43:26.202957   22837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:43:26.232194   22837 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:43:26.232534   22837 kapi.go:59] client config for ha-685475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 18:43:26.232613   22837 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.7:8443
	I0924 18:43:26.232964   22837 node_ready.go:35] waiting up to 6m0s for node "ha-685475-m03" to be "Ready" ...
	I0924 18:43:26.233091   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.233102   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.233113   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.233119   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.236798   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:26.733233   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:26.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:26.733268   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:26.733273   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:26.736350   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:27.234119   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.234154   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.234165   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.234175   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.240637   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:27.733351   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:27.733376   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:27.733387   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:27.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:27.742949   22837 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0924 18:43:28.233173   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.233194   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.233202   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.233206   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.236224   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:28.237052   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:28.733360   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:28.733382   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:28.733394   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:28.733399   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:28.736288   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:29.233877   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.233916   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.233928   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.233933   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.239798   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:29.733882   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:29.733906   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:29.733918   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:29.733925   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:29.738420   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:30.233669   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.233691   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.233699   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.233702   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.237023   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:30.237689   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:30.733690   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:30.733716   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:30.733726   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:30.733733   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:30.736562   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:31.233177   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.233204   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.233216   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.233221   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.237262   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:31.733331   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:31.733356   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:31.733368   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:31.733375   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:31.736291   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:32.234100   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.234122   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.234130   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.234134   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.237699   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:32.238691   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:32.734110   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:32.734139   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:32.734148   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:32.734156   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:32.737099   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:33.233554   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.233581   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.233597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.233602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.236923   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:33.733151   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:33.733173   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:33.733181   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:33.733186   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:33.736346   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.234015   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.234035   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.234045   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.234049   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.237241   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.734163   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:34.734184   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:34.734193   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:34.734196   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:34.737761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:34.738342   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:35.234001   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.234024   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.234032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.234036   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.237606   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:35.733696   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:35.733720   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:35.733730   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:35.733735   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:35.744612   22837 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0924 18:43:36.233198   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.233218   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.233226   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.233230   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.236903   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:36.734073   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:36.734097   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:36.734107   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:36.734113   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:36.737583   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.234135   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.234158   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.234166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.234170   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.237414   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:37.238235   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:37.733447   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:37.733464   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:37.733472   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:37.733477   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:37.737157   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.233502   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.233528   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.233541   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.233550   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.236943   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:38.734024   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:38.734049   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:38.734061   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:38.734068   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:38.737560   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:39.233277   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.233313   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.238242   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:39.238885   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:39.733235   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:39.733259   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:39.733265   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:39.733269   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:39.736692   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.233260   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.233287   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.233300   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.233308   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.236543   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:40.733171   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:40.733195   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:40.733205   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:40.733212   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:40.740055   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:41.233389   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.233414   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.233422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.233428   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.238076   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.733867   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:41.733888   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:41.733896   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:41.733902   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:41.738641   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:41.739398   22837 node_ready.go:53] node "ha-685475-m03" has status "Ready":"False"
	I0924 18:43:42.233262   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.233290   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.233307   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.233314   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.236491   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:42.733416   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:42.733438   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:42.733445   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:42.733450   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:42.736799   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.233279   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.233299   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.233308   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.233312   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.238341   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.238906   22837 node_ready.go:49] node "ha-685475-m03" has status "Ready":"True"
	I0924 18:43:43.238924   22837 node_ready.go:38] duration metric: took 17.005939201s for node "ha-685475-m03" to be "Ready" ...
	I0924 18:43:43.238932   22837 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:43.239003   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:43.239014   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.239021   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.239028   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.244370   22837 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 18:43:43.251285   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.251369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fchhl
	I0924 18:43:43.251380   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.251391   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.251397   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.254058   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.254668   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.254684   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.254696   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.254705   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.256747   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.257336   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.257356   22837 pod_ready.go:82] duration metric: took 6.045735ms for pod "coredns-7c65d6cfc9-fchhl" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257366   22837 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.257424   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jf7wr
	I0924 18:43:43.257436   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.257446   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.257453   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.259853   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.260510   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.260535   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.260545   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.260560   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.262661   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.263075   22837 pod_ready.go:93] pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.263089   22837 pod_ready.go:82] duration metric: took 5.713062ms for pod "coredns-7c65d6cfc9-jf7wr" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263099   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.263153   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475
	I0924 18:43:43.263164   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.263173   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.263181   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.265421   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.266025   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:43.266041   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.266051   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.266056   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.268154   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.268655   22837 pod_ready.go:93] pod "etcd-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.268677   22837 pod_ready.go:82] duration metric: took 5.571952ms for pod "etcd-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268686   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.268729   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m02
	I0924 18:43:43.268736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.268743   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.268748   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.270920   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.271534   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:43.271559   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.271569   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.271575   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.273706   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:43.274155   22837 pod_ready.go:93] pod "etcd-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.274174   22837 pod_ready.go:82] duration metric: took 5.482358ms for pod "etcd-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.274182   22837 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.433530   22837 request.go:632] Waited for 159.301092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433597   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685475-m03
	I0924 18:43:43.433607   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.433614   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.433620   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.436812   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.633686   22837 request.go:632] Waited for 196.323402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633768   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:43.633775   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.633786   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.633789   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.636913   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:43.637664   22837 pod_ready.go:93] pod "etcd-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:43.637687   22837 pod_ready.go:82] duration metric: took 363.498352ms for pod "etcd-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.637711   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:43.833926   22837 request.go:632] Waited for 196.128909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.833999   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475
	I0924 18:43:43.834017   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:43.834032   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:43.834048   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:43.837007   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:44.033945   22837 request.go:632] Waited for 196.25ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.033995   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:44.034000   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.034007   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.034013   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.037183   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.037998   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.038015   22837 pod_ready.go:82] duration metric: took 400.293259ms for pod "kube-apiserver-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.038024   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.233670   22837 request.go:632] Waited for 195.573608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233746   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m02
	I0924 18:43:44.233751   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.233759   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.233770   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.236800   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.434104   22837 request.go:632] Waited for 196.353101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434150   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:44.434155   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.434162   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.434166   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.437459   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.438061   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.438077   22837 pod_ready.go:82] duration metric: took 400.046958ms for pod "kube-apiserver-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.438087   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.634247   22837 request.go:632] Waited for 196.068994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634307   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685475-m03
	I0924 18:43:44.634314   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.634323   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.634333   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.637761   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.834009   22837 request.go:632] Waited for 195.341273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834062   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:44.834067   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:44.834075   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:44.834079   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:44.837377   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:44.838102   22837 pod_ready.go:93] pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:44.838124   22837 pod_ready.go:82] duration metric: took 400.029506ms for pod "kube-apiserver-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:44.838137   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.033524   22837 request.go:632] Waited for 195.317742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033577   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475
	I0924 18:43:45.033583   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.033597   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.033602   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.038542   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.233396   22837 request.go:632] Waited for 194.275856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233476   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:45.233483   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.233494   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.233499   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.237836   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:45.238292   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.238309   22837 pod_ready.go:82] duration metric: took 400.16501ms for pod "kube-controller-manager-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.238319   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.434068   22837 request.go:632] Waited for 195.691023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434126   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m02
	I0924 18:43:45.434131   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.434138   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.434142   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.437774   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.634002   22837 request.go:632] Waited for 195.223479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634063   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:45.634070   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.634080   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.634086   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.637445   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:45.638048   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:45.638072   22837 pod_ready.go:82] duration metric: took 399.746216ms for pod "kube-controller-manager-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.638086   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:45.833552   22837 request.go:632] Waited for 195.400527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833619   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685475-m03
	I0924 18:43:45.833626   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:45.833637   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:45.833645   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:45.837253   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.033410   22837 request.go:632] Waited for 195.28753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033466   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:46.033471   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.033479   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.033484   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.036819   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.037577   22837 pod_ready.go:93] pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.037601   22837 pod_ready.go:82] duration metric: took 399.507145ms for pod "kube-controller-manager-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.037614   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.233664   22837 request.go:632] Waited for 195.987183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233730   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8x2w
	I0924 18:43:46.233736   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.233744   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.233751   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.236704   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.433753   22837 request.go:632] Waited for 196.36056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433836   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:46.433849   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.433858   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.433864   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.436885   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.437346   22837 pod_ready.go:93] pod "kube-proxy-b8x2w" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.437362   22837 pod_ready.go:82] duration metric: took 399.741929ms for pod "kube-proxy-b8x2w" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.437371   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.633383   22837 request.go:632] Waited for 195.935746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633452   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlr8f
	I0924 18:43:46.633459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.633467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.633472   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.636654   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:46.833848   22837 request.go:632] Waited for 196.369969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833916   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:46.833926   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:46.833936   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:46.833944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:46.836871   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:46.837369   22837 pod_ready.go:93] pod "kube-proxy-dlr8f" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:46.837390   22837 pod_ready.go:82] duration metric: took 400.012248ms for pod "kube-proxy-dlr8f" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:46.837402   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.033325   22837 request.go:632] Waited for 195.841602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033432   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mzlfj
	I0924 18:43:47.033444   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.033452   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.033455   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.037080   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.234175   22837 request.go:632] Waited for 196.377747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234251   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:47.234257   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.234266   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.234278   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.238255   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.238898   22837 pod_ready.go:93] pod "kube-proxy-mzlfj" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.238919   22837 pod_ready.go:82] duration metric: took 401.508549ms for pod "kube-proxy-mzlfj" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.238933   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.433952   22837 request.go:632] Waited for 194.91975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434033   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475
	I0924 18:43:47.434044   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.434055   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.434064   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.437332   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.633347   22837 request.go:632] Waited for 195.287392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633423   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475
	I0924 18:43:47.633433   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.633441   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.633445   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.636933   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:47.637777   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:47.637815   22837 pod_ready.go:82] duration metric: took 398.871168ms for pod "kube-scheduler-ha-685475" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.637829   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:47.834176   22837 request.go:632] Waited for 196.271361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834232   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m02
	I0924 18:43:47.834238   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:47.834246   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:47.834250   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:47.836928   22837 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 18:43:48.033993   22837 request.go:632] Waited for 196.330346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034058   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m02
	I0924 18:43:48.034064   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.034074   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.034084   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.037490   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.038369   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.038391   22837 pod_ready.go:82] duration metric: took 400.547551ms for pod "kube-scheduler-ha-685475-m02" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.038404   22837 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.233397   22837 request.go:632] Waited for 194.929707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233454   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685475-m03
	I0924 18:43:48.233459   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.233467   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.233471   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.236987   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.433994   22837 request.go:632] Waited for 196.397643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434055   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes/ha-685475-m03
	I0924 18:43:48.434062   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.434073   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.434081   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.437996   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.438514   22837 pod_ready.go:93] pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 18:43:48.438617   22837 pod_ready.go:82] duration metric: took 400.123712ms for pod "kube-scheduler-ha-685475-m03" in "kube-system" namespace to be "Ready" ...
	I0924 18:43:48.438680   22837 pod_ready.go:39] duration metric: took 5.199733297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:43:48.438705   22837 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:43:48.438774   22837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:43:48.452044   22837 api_server.go:72] duration metric: took 22.508447307s to wait for apiserver process to appear ...
	I0924 18:43:48.452066   22837 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:43:48.452082   22837 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0924 18:43:48.457867   22837 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0924 18:43:48.457929   22837 round_trippers.go:463] GET https://192.168.39.7:8443/version
	I0924 18:43:48.457937   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.457945   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.457950   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.458795   22837 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 18:43:48.458877   22837 api_server.go:141] control plane version: v1.31.1
	I0924 18:43:48.458893   22837 api_server.go:131] duration metric: took 6.820487ms to wait for apiserver health ...
	I0924 18:43:48.458900   22837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:43:48.634297   22837 request.go:632] Waited for 175.332984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634358   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:48.634374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.634381   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.634385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.640434   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:48.648701   22837 system_pods.go:59] 24 kube-system pods found
	I0924 18:43:48.648727   22837 system_pods.go:61] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:48.648734   22837 system_pods.go:61] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:48.648739   22837 system_pods.go:61] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:48.648744   22837 system_pods.go:61] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:48.648749   22837 system_pods.go:61] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:48.648753   22837 system_pods.go:61] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:48.648758   22837 system_pods.go:61] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:48.648764   22837 system_pods.go:61] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:48.648769   22837 system_pods.go:61] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:48.648778   22837 system_pods.go:61] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:48.648786   22837 system_pods.go:61] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:48.648794   22837 system_pods.go:61] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:48.648799   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:48.648804   22837 system_pods.go:61] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:48.648810   22837 system_pods.go:61] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:48.648818   22837 system_pods.go:61] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:48.648824   22837 system_pods.go:61] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:48.648829   22837 system_pods.go:61] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:48.648835   22837 system_pods.go:61] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:48.648848   22837 system_pods.go:61] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:48.648855   22837 system_pods.go:61] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:48.648860   22837 system_pods.go:61] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:48.648867   22837 system_pods.go:61] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:48.648873   22837 system_pods.go:61] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:48.648881   22837 system_pods.go:74] duration metric: took 189.974541ms to wait for pod list to return data ...
	I0924 18:43:48.648894   22837 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:43:48.834315   22837 request.go:632] Waited for 185.353374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834369   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0924 18:43:48.834374   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:48.834382   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:48.834385   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:48.838136   22837 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 18:43:48.838236   22837 default_sa.go:45] found service account: "default"
	I0924 18:43:48.838249   22837 default_sa.go:55] duration metric: took 189.347233ms for default service account to be created ...
	I0924 18:43:48.838257   22837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:43:49.033856   22837 request.go:632] Waited for 195.536486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033925   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0924 18:43:49.033930   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.033939   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.033944   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.040875   22837 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 18:43:49.047492   22837 system_pods.go:86] 24 kube-system pods found
	I0924 18:43:49.047517   22837 system_pods.go:89] "coredns-7c65d6cfc9-fchhl" [dc58fefc-6210-4b70-bd0d-dbf5b093e09a] Running
	I0924 18:43:49.047522   22837 system_pods.go:89] "coredns-7c65d6cfc9-jf7wr" [a616493e-082e-4ae6-8e12-8c4a2b37a985] Running
	I0924 18:43:49.047526   22837 system_pods.go:89] "etcd-ha-685475" [f76413e6-46f1-4914-9ba4-719c8f2b098b] Running
	I0924 18:43:49.047531   22837 system_pods.go:89] "etcd-ha-685475-m02" [f37ad824-aa9c-42e9-b9fa-82423aab2a30] Running
	I0924 18:43:49.047535   22837 system_pods.go:89] "etcd-ha-685475-m03" [aa636f08-f0af-4453-b8fd-2637f9edce98] Running
	I0924 18:43:49.047538   22837 system_pods.go:89] "kindnet-7w5dn" [dc2e3477-1c01-4af2-a8b5-0433c75dc3d1] Running
	I0924 18:43:49.047541   22837 system_pods.go:89] "kindnet-ms6qb" [60485f55-3830-4897-b38e-55779662b999] Running
	I0924 18:43:49.047544   22837 system_pods.go:89] "kindnet-pwvfj" [e47e9124-c023-41f2-8b05-5fde3cf09dc1] Running
	I0924 18:43:49.047549   22837 system_pods.go:89] "kube-apiserver-ha-685475" [f7dc1ef7-fba6-48c4-8868-de5eccdbbea3] Running
	I0924 18:43:49.047553   22837 system_pods.go:89] "kube-apiserver-ha-685475-m02" [96b5dd69-0cc4-42d9-a42e-b1665ab1890a] Running
	I0924 18:43:49.047556   22837 system_pods.go:89] "kube-apiserver-ha-685475-m03" [f6efa935-e9a5-4f21-8c4c-571bbe7ab65d] Running
	I0924 18:43:49.047560   22837 system_pods.go:89] "kube-controller-manager-ha-685475" [3d40caef-e1c5-4e4b-9908-cf2767bb686f] Running
	I0924 18:43:49.047563   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m02" [0fb0ca36-0340-49f7-8c5d-acf933c181ad] Running
	I0924 18:43:49.047567   22837 system_pods.go:89] "kube-controller-manager-ha-685475-m03" [0a1e0dac-494b-4892-b945-bf45d87baa4d] Running
	I0924 18:43:49.047570   22837 system_pods.go:89] "kube-proxy-b8x2w" [95e65f4e-7461-479a-8743-ce4f891abfcf] Running
	I0924 18:43:49.047574   22837 system_pods.go:89] "kube-proxy-dlr8f" [e463fdb8-b27f-4e4a-8887-6534c92a21aa] Running
	I0924 18:43:49.047577   22837 system_pods.go:89] "kube-proxy-mzlfj" [2fcf9e88-63de-45cc-b82a-87f1589f9565] Running
	I0924 18:43:49.047580   22837 system_pods.go:89] "kube-scheduler-ha-685475" [b82f1f3f-4c7a-49b3-9dab-ba6dfdd3c2ed] Running
	I0924 18:43:49.047583   22837 system_pods.go:89] "kube-scheduler-ha-685475-m02" [53e1a4b3-4e3a-4d14-9cdf-eedbf83877b4] Running
	I0924 18:43:49.047586   22837 system_pods.go:89] "kube-scheduler-ha-685475-m03" [eee036e1-933e-42d1-9b3d-63f6f13ac6a3] Running
	I0924 18:43:49.047589   22837 system_pods.go:89] "kube-vip-ha-685475" [ad2ed915-5276-4ba2-b097-df9074e8c2ef] Running
	I0924 18:43:49.047591   22837 system_pods.go:89] "kube-vip-ha-685475-m02" [916f0d4d-70d4-4347-9337-84e5c77ca834] Running
	I0924 18:43:49.047594   22837 system_pods.go:89] "kube-vip-ha-685475-m03" [a7e9d21c-45e2-4bcf-9e84-6c2c351d2f68] Running
	I0924 18:43:49.047597   22837 system_pods.go:89] "storage-provisioner" [e0f5497a-ae6d-4051-b1bc-c84c91d0fd12] Running
	I0924 18:43:49.047603   22837 system_pods.go:126] duration metric: took 209.341697ms to wait for k8s-apps to be running ...
	I0924 18:43:49.047611   22837 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:43:49.047657   22837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:43:49.065856   22837 system_svc.go:56] duration metric: took 18.234674ms WaitForService to wait for kubelet
	I0924 18:43:49.065885   22837 kubeadm.go:582] duration metric: took 23.12228905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:43:49.065905   22837 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:43:49.234361   22837 request.go:632] Waited for 168.355831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234409   22837 round_trippers.go:463] GET https://192.168.39.7:8443/api/v1/nodes
	I0924 18:43:49.234415   22837 round_trippers.go:469] Request Headers:
	I0924 18:43:49.234422   22837 round_trippers.go:473]     Accept: application/json, */*
	I0924 18:43:49.234427   22837 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 18:43:49.238548   22837 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 18:43:49.242121   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242144   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242160   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242164   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242167   22837 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 18:43:49.242170   22837 node_conditions.go:123] node cpu capacity is 2
	I0924 18:43:49.242174   22837 node_conditions.go:105] duration metric: took 176.264509ms to run NodePressure ...
	I0924 18:43:49.242184   22837 start.go:241] waiting for startup goroutines ...
	I0924 18:43:49.242210   22837 start.go:255] writing updated cluster config ...
	I0924 18:43:49.242507   22837 ssh_runner.go:195] Run: rm -f paused
	I0924 18:43:49.294738   22837 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:43:49.297711   22837 out.go:177] * Done! kubectl is now configured to use "ha-685475" cluster and "default" namespace by default
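The log above shows the readiness sequence minikube walks through before declaring the cluster usable: poll the apiserver /healthz endpoint until it returns 200, list kube-system pods until each reports Running, and confirm the kubelet unit is active over SSH. A minimal Go sketch of the healthz polling step follows; it is not minikube's own implementation, and the URL, timeout, and poll interval are illustrative assumptions.

    // healthzwait.go: poll an apiserver /healthz endpoint until it returns 200.
    // Sketch only; endpoint, timeout, and interval are assumptions, and a real
    // check would trust the cluster CA instead of skipping TLS verification.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The test cluster uses a self-signed CA; skipped here to keep
    			// the sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered 200 ("ok"), as in the log above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.7:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Against the cluster in this run the call would target https://192.168.39.7:8443/healthz, matching the GET requests logged above.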
	
	
	==> CRI-O <==
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.921446830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203653921426834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73d27bb0-22d1-4860-bf22-b67268d30dfd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.921997349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7a8c3e6-6875-48cd-b0f4-4e57ab7e4a04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.922060764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7a8c3e6-6875-48cd-b0f4-4e57ab7e4a04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.922362577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7a8c3e6-6875-48cd-b0f4-4e57ab7e4a04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.958299246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fa7aa6d-335e-4b33-8abb-83704770abc2 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.958384792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fa7aa6d-335e-4b33-8abb-83704770abc2 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.959257553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15c3fffa-ee29-43b0-b0f3-d2096e547e13 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.959645596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203653959626242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15c3fffa-ee29-43b0-b0f3-d2096e547e13 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.960126480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dbfd705-3fa0-4cad-9dab-e07ca2de78af name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.960193277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dbfd705-3fa0-4cad-9dab-e07ca2de78af name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.960403908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dbfd705-3fa0-4cad-9dab-e07ca2de78af name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.993884521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0309163e-214a-4684-9d87-ca014793269d name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.993963115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0309163e-214a-4684-9d87-ca014793269d name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.995314109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb476705-a68d-4248-8b96-a78fb8db1bfe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.995741738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203653995719273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb476705-a68d-4248-8b96-a78fb8db1bfe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.996276082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=694ce49e-a77c-46b4-915c-682f7213be3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.996339516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=694ce49e-a77c-46b4-915c-682f7213be3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:33 ha-685475 crio[662]: time="2024-09-24 18:47:33.996561907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=694ce49e-a77c-46b4-915c-682f7213be3a name=/runtime.v1.RuntimeService/ListContainers
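The repeating Version, ImageFsInfo, and unfiltered ListContainers request/response pairs above are standard CRI RPCs served by CRI-O. They can be reproduced on the node with crictl, which speaks the same API over the runtime socket; the socket path below is the CRI-O default and may differ in other setups.

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

Inside the minikube guest the same commands can typically be run via "minikube ssh -- sudo crictl ps -a", which should list the busybox, coredns, kindnet, kube-proxy, kube-vip, and control-plane containers enumerated in the responses above.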
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.030466287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dec3300a-77b1-401d-ad98-0e2b919e3941 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.030551595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dec3300a-77b1-401d-ad98-0e2b919e3941 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.031639964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ac84aa6-0a0f-45b3-82dd-16532642e3fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.032120954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203654032097696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ac84aa6-0a0f-45b3-82dd-16532642e3fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.032658297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5fc413f-a6d4-489a-8fdf-3dee2e10e904 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.032720815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5fc413f-a6d4-489a-8fdf-3dee2e10e904 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:47:34 ha-685475 crio[662]: time="2024-09-24 18:47:34.033046784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203432776977765,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e,PodSandboxId:5cb07ffbc15c1db48161a46e1ce4a69e3d024a8ff62c886643723089f33e75f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203297582724205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297608075068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203297571303610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-62
10-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17272032
85606256261,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203285407492796,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a,PodSandboxId:8b6709d2b9d03b71e71df6dad09e42d52601e38a0e0ee46ecd31f5480fd75d19,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203275865927706,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b1b3e358bc7b86c05e843e83024d248,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203273109777744,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203273059329172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8,PodSandboxId:480a4fc4d507ff4484472442542def1cc671c1320151a75812f1b0b2d858bf48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203273017878673,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad,PodSandboxId:2ee65b29ae3d23587d2aa4aad308fca9a43ac64a3c3c891ebb43fab609b64f7a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203272969975750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5fc413f-a6d4-489a-8fdf-3dee2e10e904 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b86d48937d84       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2517ecd8d61cd       busybox-7dff88458-hmkfk
	2c7b4241a9158       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c2c9f0a12f919       coredns-7c65d6cfc9-jf7wr
	7101ffaf02677       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5cb07ffbc15c1       storage-provisioner
	75aac96a2239b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   9f53b2b4e4e29       coredns-7c65d6cfc9-fchhl
	709da73468c82       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   6c65efd736505       kindnet-ms6qb
	9ea87ecceac1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   bbb4cec818818       kube-proxy-b8x2w
	40f5664db9017       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8b6709d2b9d03       kube-vip-ha-685475
	e62a02dab3075       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   9ade6d826e125       kube-scheduler-ha-685475
	efe5b6f3ceb69       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5fa1209cd75b8       etcd-ha-685475
	5686da29f7aac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   480a4fc4d507f       kube-controller-manager-ha-685475
	838b3cda70bf1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   2ee65b29ae3d2       kube-apiserver-ha-685475
	
	
	==> coredns [2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235] <==
	[INFO] 10.244.2.2:43478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117921s
	[INFO] 10.244.0.4:52601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001246s
	[INFO] 10.244.0.4:57647 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118972s
	[INFO] 10.244.0.4:59286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001434237s
	[INFO] 10.244.0.4:55987 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082081s
	[INFO] 10.244.1.2:44949 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002196411s
	[INFO] 10.244.1.2:57646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132442s
	[INFO] 10.244.1.2:45986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001533759s
	[INFO] 10.244.1.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159221s
	[INFO] 10.244.1.2:47730 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122802s
	[INFO] 10.244.2.2:49373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174893s
	[INFO] 10.244.0.4:52492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008787s
	[INFO] 10.244.0.4:33570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049583s
	[INFO] 10.244.0.4:35717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036153s
	[INFO] 10.244.1.2:39348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262289s
	[INFO] 10.244.1.2:44144 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216176s
	[INFO] 10.244.1.2:37532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017928s
	[INFO] 10.244.2.2:34536 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139562s
	[INFO] 10.244.0.4:43378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108735s
	[INFO] 10.244.0.4:50975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139299s
	[INFO] 10.244.0.4:36798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091581s
	[INFO] 10.244.1.2:55450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136524s
	[INFO] 10.244.1.2:46887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019253s
	[INFO] 10.244.1.2:39275 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113225s
	[INFO] 10.244.1.2:44182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097101s
	
	
	==> coredns [75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f] <==
	[INFO] 10.244.2.2:51539 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.04751056s
	[INFO] 10.244.2.2:56073 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013178352s
	[INFO] 10.244.0.4:46583 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000099115s
	[INFO] 10.244.1.2:39503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018791s
	[INFO] 10.244.1.2:56200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000107364s
	[INFO] 10.244.1.2:50181 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000477328s
	[INFO] 10.244.2.2:48517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149349s
	[INFO] 10.244.2.2:37426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161156s
	[INFO] 10.244.2.2:51780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245454s
	[INFO] 10.244.0.4:37360 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192766s
	[INFO] 10.244.0.4:49282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067708s
	[INFO] 10.244.0.4:50475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049077s
	[INFO] 10.244.0.4:42734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103381s
	[INFO] 10.244.1.2:34090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126966s
	[INFO] 10.244.1.2:49474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199973s
	[INFO] 10.244.1.2:47488 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080517s
	[INFO] 10.244.2.2:58501 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129358s
	[INFO] 10.244.2.2:35831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166592s
	[INFO] 10.244.2.2:46260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105019s
	[INFO] 10.244.0.4:34512 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070631s
	[INFO] 10.244.1.2:40219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095437s
	[INFO] 10.244.2.2:45584 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263954s
	[INFO] 10.244.2.2:45346 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105804s
	[INFO] 10.244.2.2:33451 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099783s
	[INFO] 10.244.0.4:54263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102026s
	
	
	==> describe nodes <==
	Name:               ha-685475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:23 +0000   Tue, 24 Sep 2024 18:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-685475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6728db94ca4a90af6f3c76683b52c2
	  System UUID:                7d6728db-94ca-4a90-af6f-3c76683b52c2
	  Boot ID:                    d6338982-1afe-44d6-a104-48e80df984ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmkfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-fchhl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-jf7wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-685475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-ms6qb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-685475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-685475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-b8x2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-685475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-685475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-685475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-685475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-685475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  NodeReady                5m57s  kubelet          Node ha-685475 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	
	
	Name:               ha-685475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:42:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:44:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 18:44:12 +0000   Tue, 24 Sep 2024 18:45:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-685475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad56c26961cf4d94852f19122c4c499b
	  System UUID:                ad56c269-61cf-4d94-852f-19122c4c499b
	  Boot ID:                    e772e23b-db48-4470-a822-ef2e8ff749c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6g8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-685475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-pwvfj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-685475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-685475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-dlr8f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-685475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-685475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-685475-m02 status is now: NodeNotReady
	
	
	Name:               ha-685475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:43:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:24 +0000   Tue, 24 Sep 2024 18:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-685475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666f55d24f014a7598addca9cb06654f
	  System UUID:                666f55d2-4f01-4a75-98ad-dca9cb06654f
	  Boot ID:                    4a6f3fd5-8906-4dce-b1f1-42fe5e6d144d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gksmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-685475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-7w5dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-685475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-685475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-mzlfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-ha-685475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-685475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-685475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	
	
	Name:               ha-685475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_44_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:47:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:44:54 +0000   Tue, 24 Sep 2024 18:44:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-685475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5be0e3597a0f4236b1fa9e5e221d49dc
	  System UUID:                5be0e359-7a0f-4236-b1fa-9e5e221d49dc
	  Boot ID:                    076086b0-4e87-4ae6-8221-9f0322235896
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4nlv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m10s
	  kube-system                 kube-proxy-9m62z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m10s (x2 over 3m11s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s (x2 over 3m11s)  kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s (x2 over 3m11s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal  NodeReady                2m52s                  kubelet          Node ha-685475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep24 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047306] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.684392] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.705375] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.505519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep24 18:41] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.156659] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148421] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.267579] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.782999] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.621822] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.062553] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.171108] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.082463] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344664] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.133235] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:42] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707] <==
	{"level":"warn","ts":"2024-09-24T18:47:34.293778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.298785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.305164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.307915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.310695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.319770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.325786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.326919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.333428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.336845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.339497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.385945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.396717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.402373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.408157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.412020Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.415076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.417636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.426398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.491924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.497103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.502392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.526403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.528449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:47:34.554728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:47:34 up 6 min,  0 users,  load average: 0.04, 0.20, 0.11
	Linux ha-685475 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678] <==
	I0924 18:46:56.555675       1 main.go:299] handling current node
	I0924 18:47:06.561634       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:06.561692       1 main.go:299] handling current node
	I0924 18:47:06.561710       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:06.561715       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:06.561848       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:06.561866       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:06.561914       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:06.561931       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:16.564762       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:16.564893       1 main.go:299] handling current node
	I0924 18:47:16.564926       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:16.564945       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:47:16.565064       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:16.565119       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:16.565194       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:16.565212       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:26.555520       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:47:26.555700       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:47:26.555958       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:47:26.556011       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:47:26.556121       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:47:26.556157       1 main.go:299] handling current node
	I0924 18:47:26.556192       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:47:26.556214       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad] <==
	I0924 18:41:17.672745       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 18:41:17.723505       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 18:41:17.816990       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0924 18:41:17.823594       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0924 18:41:17.824633       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:41:17.829868       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:41:18.021888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 18:41:19.286470       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 18:41:19.299197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 18:41:19.310963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 18:41:23.075217       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 18:41:23.423831       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 18:43:54.268115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58120: use of closed network connection
	E0924 18:43:54.604143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58158: use of closed network connection
	E0924 18:43:54.783115       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58164: use of closed network connection
	E0924 18:43:54.950893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58168: use of closed network connection
	E0924 18:43:55.309336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58194: use of closed network connection
	E0924 18:43:55.511247       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58214: use of closed network connection
	E0924 18:43:55.954224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58254: use of closed network connection
	E0924 18:43:56.117109       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58266: use of closed network connection
	E0924 18:43:56.281611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58282: use of closed network connection
	E0924 18:43:56.451342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58292: use of closed network connection
	E0924 18:43:56.632767       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58308: use of closed network connection
	E0924 18:43:56.794004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58330: use of closed network connection
	W0924 18:45:17.827671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.7 192.168.39.84]
	
	
	==> kube-controller-manager [5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8] <==
	I0924 18:44:24.247180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.247492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.265765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.436622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:44:24.498908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:24.871884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:26.085940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.805304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.915596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.967113       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:27.968167       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-685475-m04"
	I0924 18:44:28.400258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:34.420054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.456619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:44:42.456667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.471240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:42.830571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:44:54.874379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:45:36.091506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.091566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:45:36.110189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:36.281556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.99566ms"
	I0924 18:45:36.282660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.243µs"
	I0924 18:45:38.045778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:45:41.375346       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	
	
	==> kube-proxy [9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:41:25.700409       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:41:25.766662       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	E0924 18:41:25.766911       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:41:25.811114       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:41:25.811144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:41:25.811180       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:41:25.813724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:41:25.814452       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:41:25.814533       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:41:25.818487       1 config.go:199] "Starting service config controller"
	I0924 18:41:25.819365       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:41:25.820408       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:41:25.820718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:41:25.821642       1 config.go:328] "Starting node config controller"
	I0924 18:41:25.822952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:41:25.921008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:41:25.923339       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:41:25.923395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc] <==
	W0924 18:41:16.961127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:41:16.961178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:16.962189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:41:16.962268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.047239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:41:17.047364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.102252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.102364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.222048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:41:17.222166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.230553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:41:17.231072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.384731       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:41:17.384781       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:41:17.385753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.385816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:41:20.277859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 18:43:50.159728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w6g8l" node="ha-685475-m02"
	E0924 18:43:50.159906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" pod="default/busybox-7dff88458-w6g8l"
	E0924 18:43:50.160616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hmkfk" node="ha-685475"
	E0924 18:43:50.160683       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" pod="default/busybox-7dff88458-hmkfk"
	E0924 18:44:24.296261       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:44:24.296334       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d172ae09-1eb7-4e5d-a5a1-e865b926b6eb(kube-system/kube-proxy-9m62z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9m62z"
	E0924 18:44:24.296350       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" pod="kube-system/kube-proxy-9m62z"
	I0924 18:44:24.296367       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	
	
	==> kubelet <==
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:46:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:46:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289533    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:19 ha-685475 kubelet[1306]: E0924 18:46:19.289568    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203579289109382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292185    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:29 ha-685475 kubelet[1306]: E0924 18:46:29.292494    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203589291965830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293680    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:39 ha-685475 kubelet[1306]: E0924 18:46:39.293717    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203599293434791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295059    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:49 ha-685475 kubelet[1306]: E0924 18:46:49.295397    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203609294682424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296553    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:46:59 ha-685475 kubelet[1306]: E0924 18:46:59.296987    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203619296254794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.298543    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:09 ha-685475 kubelet[1306]: E0924 18:47:09.301982    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203629298152404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.239486    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:47:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:47:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303369    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:19 ha-685475 kubelet[1306]: E0924 18:47:19.303405    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203639303146026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:29 ha-685475 kubelet[1306]: E0924 18:47:29.304637    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203649304338764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:47:29 ha-685475 kubelet[1306]: E0924 18:47:29.304658    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203649304338764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-685475 -n ha-685475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-685475 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-685475 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-685475 -v=7 --alsologtostderr: exit status 82 (2m1.719092165s)

                                                
                                                
-- stdout --
	* Stopping node "ha-685475-m04"  ...
	* Stopping node "ha-685475-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:47:39.694046   27994 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:47:39.694268   27994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:47:39.694276   27994 out.go:358] Setting ErrFile to fd 2...
	I0924 18:47:39.694280   27994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:47:39.694431   27994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:47:39.694638   27994 out.go:352] Setting JSON to false
	I0924 18:47:39.694721   27994 mustload.go:65] Loading cluster: ha-685475
	I0924 18:47:39.695125   27994 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:47:39.695229   27994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:47:39.695407   27994 mustload.go:65] Loading cluster: ha-685475
	I0924 18:47:39.695581   27994 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:47:39.695622   27994 stop.go:39] StopHost: ha-685475-m04
	I0924 18:47:39.696036   27994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:47:39.696082   27994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:47:39.710200   27994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I0924 18:47:39.710638   27994 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:47:39.711178   27994 main.go:141] libmachine: Using API Version  1
	I0924 18:47:39.711204   27994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:47:39.711546   27994 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:47:39.713775   27994 out.go:177] * Stopping node "ha-685475-m04"  ...
	I0924 18:47:39.714817   27994 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 18:47:39.714864   27994 main.go:141] libmachine: (ha-685475-m04) Calling .DriverName
	I0924 18:47:39.715092   27994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 18:47:39.715129   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHHostname
	I0924 18:47:39.717722   27994 main.go:141] libmachine: (ha-685475-m04) DBG | domain ha-685475-m04 has defined MAC address 52:54:00:46:d7:0c in network mk-ha-685475
	I0924 18:47:39.718137   27994 main.go:141] libmachine: (ha-685475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:d7:0c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:44:11 +0000 UTC Type:0 Mac:52:54:00:46:d7:0c Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-685475-m04 Clientid:01:52:54:00:46:d7:0c}
	I0924 18:47:39.718179   27994 main.go:141] libmachine: (ha-685475-m04) DBG | domain ha-685475-m04 has defined IP address 192.168.39.123 and MAC address 52:54:00:46:d7:0c in network mk-ha-685475
	I0924 18:47:39.718308   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHPort
	I0924 18:47:39.718465   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHKeyPath
	I0924 18:47:39.718598   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHUsername
	I0924 18:47:39.718726   27994 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m04/id_rsa Username:docker}
	I0924 18:47:39.797328   27994 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 18:47:39.849318   27994 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 18:47:39.902604   27994 main.go:141] libmachine: Stopping "ha-685475-m04"...
	I0924 18:47:39.902635   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetState
	I0924 18:47:39.904142   27994 main.go:141] libmachine: (ha-685475-m04) Calling .Stop
	I0924 18:47:39.907577   27994 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 0/120
	I0924 18:47:40.972161   27994 main.go:141] libmachine: (ha-685475-m04) Calling .GetState
	I0924 18:47:40.973483   27994 main.go:141] libmachine: Machine "ha-685475-m04" was stopped.
	I0924 18:47:40.973501   27994 stop.go:75] duration metric: took 1.258685262s to stop
	I0924 18:47:40.973541   27994 stop.go:39] StopHost: ha-685475-m03
	I0924 18:47:40.973969   27994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:47:40.974022   27994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:47:40.988316   27994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0924 18:47:40.988735   27994 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:47:40.989224   27994 main.go:141] libmachine: Using API Version  1
	I0924 18:47:40.989244   27994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:47:40.989601   27994 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:47:40.991589   27994 out.go:177] * Stopping node "ha-685475-m03"  ...
	I0924 18:47:40.992888   27994 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 18:47:40.992910   27994 main.go:141] libmachine: (ha-685475-m03) Calling .DriverName
	I0924 18:47:40.993098   27994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 18:47:40.993118   27994 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHHostname
	I0924 18:47:40.995629   27994 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:47:40.996023   27994 main.go:141] libmachine: (ha-685475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f3:5c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:42:48 +0000 UTC Type:0 Mac:52:54:00:47:f3:5c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-685475-m03 Clientid:01:52:54:00:47:f3:5c}
	I0924 18:47:40.996083   27994 main.go:141] libmachine: (ha-685475-m03) DBG | domain ha-685475-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:47:f3:5c in network mk-ha-685475
	I0924 18:47:40.996148   27994 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHPort
	I0924 18:47:40.996318   27994 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHKeyPath
	I0924 18:47:40.996465   27994 main.go:141] libmachine: (ha-685475-m03) Calling .GetSSHUsername
	I0924 18:47:40.996568   27994 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m03/id_rsa Username:docker}
	I0924 18:47:41.072831   27994 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 18:47:41.124894   27994 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 18:47:41.178568   27994 main.go:141] libmachine: Stopping "ha-685475-m03"...
	I0924 18:47:41.178605   27994 main.go:141] libmachine: (ha-685475-m03) Calling .GetState
	I0924 18:47:41.180348   27994 main.go:141] libmachine: (ha-685475-m03) Calling .Stop
	I0924 18:47:41.183644   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 0/120
	I0924 18:47:42.185156   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 1/120
	I0924 18:47:43.186657   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 2/120
	I0924 18:47:44.188604   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 3/120
	I0924 18:47:45.189983   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 4/120
	I0924 18:47:46.191662   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 5/120
	I0924 18:47:47.193418   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 6/120
	I0924 18:47:48.194700   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 7/120
	I0924 18:47:49.196301   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 8/120
	I0924 18:47:50.197433   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 9/120
	I0924 18:47:51.199685   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 10/120
	I0924 18:47:52.201148   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 11/120
	I0924 18:47:53.202568   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 12/120
	I0924 18:47:54.204121   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 13/120
	I0924 18:47:55.205646   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 14/120
	I0924 18:47:56.207981   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 15/120
	I0924 18:47:57.209489   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 16/120
	I0924 18:47:58.211246   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 17/120
	I0924 18:47:59.212863   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 18/120
	I0924 18:48:00.214250   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 19/120
	I0924 18:48:01.216337   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 20/120
	I0924 18:48:02.217785   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 21/120
	I0924 18:48:03.219335   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 22/120
	I0924 18:48:04.221255   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 23/120
	I0924 18:48:05.223030   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 24/120
	I0924 18:48:06.225151   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 25/120
	I0924 18:48:07.226861   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 26/120
	I0924 18:48:08.228792   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 27/120
	I0924 18:48:09.230267   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 28/120
	I0924 18:48:10.231603   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 29/120
	I0924 18:48:11.233584   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 30/120
	I0924 18:48:12.235095   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 31/120
	I0924 18:48:13.236491   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 32/120
	I0924 18:48:14.238033   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 33/120
	I0924 18:48:15.239370   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 34/120
	I0924 18:48:16.241065   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 35/120
	I0924 18:48:17.242254   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 36/120
	I0924 18:48:18.243870   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 37/120
	I0924 18:48:19.245494   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 38/120
	I0924 18:48:20.246851   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 39/120
	I0924 18:48:21.248619   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 40/120
	I0924 18:48:22.249954   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 41/120
	I0924 18:48:23.251396   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 42/120
	I0924 18:48:24.252675   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 43/120
	I0924 18:48:25.254053   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 44/120
	I0924 18:48:26.255866   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 45/120
	I0924 18:48:27.257394   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 46/120
	I0924 18:48:28.258641   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 47/120
	I0924 18:48:29.259969   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 48/120
	I0924 18:48:30.261260   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 49/120
	I0924 18:48:31.262956   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 50/120
	I0924 18:48:32.264368   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 51/120
	I0924 18:48:33.265665   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 52/120
	I0924 18:48:34.267109   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 53/120
	I0924 18:48:35.268428   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 54/120
	I0924 18:48:36.270097   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 55/120
	I0924 18:48:37.271482   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 56/120
	I0924 18:48:38.272905   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 57/120
	I0924 18:48:39.274150   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 58/120
	I0924 18:48:40.275419   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 59/120
	I0924 18:48:41.276988   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 60/120
	I0924 18:48:42.278393   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 61/120
	I0924 18:48:43.279762   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 62/120
	I0924 18:48:44.281127   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 63/120
	I0924 18:48:45.282398   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 64/120
	I0924 18:48:46.284273   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 65/120
	I0924 18:48:47.285599   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 66/120
	I0924 18:48:48.287039   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 67/120
	I0924 18:48:49.288464   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 68/120
	I0924 18:48:50.289743   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 69/120
	I0924 18:48:51.291490   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 70/120
	I0924 18:48:52.292881   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 71/120
	I0924 18:48:53.294480   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 72/120
	I0924 18:48:54.295701   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 73/120
	I0924 18:48:55.297133   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 74/120
	I0924 18:48:56.298785   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 75/120
	I0924 18:48:57.300164   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 76/120
	I0924 18:48:58.301527   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 77/120
	I0924 18:48:59.302703   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 78/120
	I0924 18:49:00.304148   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 79/120
	I0924 18:49:01.305796   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 80/120
	I0924 18:49:02.307208   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 81/120
	I0924 18:49:03.308702   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 82/120
	I0924 18:49:04.309979   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 83/120
	I0924 18:49:05.311293   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 84/120
	I0924 18:49:06.313048   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 85/120
	I0924 18:49:07.314448   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 86/120
	I0924 18:49:08.315793   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 87/120
	I0924 18:49:09.317076   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 88/120
	I0924 18:49:10.318461   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 89/120
	I0924 18:49:11.320099   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 90/120
	I0924 18:49:12.322015   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 91/120
	I0924 18:49:13.323384   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 92/120
	I0924 18:49:14.324886   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 93/120
	I0924 18:49:15.326354   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 94/120
	I0924 18:49:16.328156   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 95/120
	I0924 18:49:17.329408   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 96/120
	I0924 18:49:18.331170   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 97/120
	I0924 18:49:19.332600   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 98/120
	I0924 18:49:20.333927   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 99/120
	I0924 18:49:21.335686   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 100/120
	I0924 18:49:22.336862   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 101/120
	I0924 18:49:23.338172   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 102/120
	I0924 18:49:24.339417   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 103/120
	I0924 18:49:25.340643   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 104/120
	I0924 18:49:26.342437   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 105/120
	I0924 18:49:27.343927   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 106/120
	I0924 18:49:28.345346   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 107/120
	I0924 18:49:29.346720   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 108/120
	I0924 18:49:30.348008   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 109/120
	I0924 18:49:31.349762   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 110/120
	I0924 18:49:32.351350   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 111/120
	I0924 18:49:33.352667   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 112/120
	I0924 18:49:34.354275   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 113/120
	I0924 18:49:35.355892   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 114/120
	I0924 18:49:36.357779   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 115/120
	I0924 18:49:37.359221   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 116/120
	I0924 18:49:38.360624   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 117/120
	I0924 18:49:39.361867   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 118/120
	I0924 18:49:40.363188   27994 main.go:141] libmachine: (ha-685475-m03) Waiting for machine to stop 119/120
	I0924 18:49:41.364316   27994 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 18:49:41.364357   27994 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 18:49:41.366686   27994 out.go:201] 
	W0924 18:49:41.368206   27994 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 18:49:41.368220   27994 out.go:270] * 
	* 
	W0924 18:49:41.370250   27994 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 18:49:41.371710   27994 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-685475 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-685475 --wait=true -v=7 --alsologtostderr
E0924 18:49:49.794849   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:50:17.493924   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:52:24.267231   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-685475 --wait=true -v=7 --alsologtostderr: (4m4.32288434s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-685475
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-685475 -n ha-685475
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 logs -n 25
E0924 18:53:47.330818   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 logs -n 25: (1.541063319s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m04 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp testdata/cp-test.txt                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m03 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-685475 node stop m02 -v=7                                                    | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-685475 node start m02 -v=7                                                   | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-685475 -v=7                                                          | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-685475 -v=7                                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-685475 --wait=true -v=7                                                   | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:49 UTC | 24 Sep 24 18:53 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-685475                                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:53 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:49:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:49:41.416395   28466 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:49:41.416639   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:49:41.416647   28466 out.go:358] Setting ErrFile to fd 2...
	I0924 18:49:41.416652   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:49:41.416833   28466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:49:41.417337   28466 out.go:352] Setting JSON to false
	I0924 18:49:41.418248   28466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1932,"bootTime":1727201849,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:49:41.418338   28466 start.go:139] virtualization: kvm guest
	I0924 18:49:41.420741   28466 out.go:177] * [ha-685475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:49:41.422252   28466 notify.go:220] Checking for updates...
	I0924 18:49:41.422298   28466 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:49:41.423695   28466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:49:41.425001   28466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:49:41.426516   28466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:49:41.427970   28466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:49:41.429351   28466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:49:41.431275   28466 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:49:41.431373   28466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:49:41.431805   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:49:41.431860   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:49:41.447208   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0924 18:49:41.447693   28466 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:49:41.448258   28466 main.go:141] libmachine: Using API Version  1
	I0924 18:49:41.448282   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:49:41.448638   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:49:41.448797   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.485733   28466 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 18:49:41.486976   28466 start.go:297] selected driver: kvm2
	I0924 18:49:41.486994   28466 start.go:901] validating driver "kvm2" against &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:49:41.487112   28466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:49:41.487450   28466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:49:41.487529   28466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:49:41.503039   28466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:49:41.503725   28466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:49:41.503754   28466 cni.go:84] Creating CNI manager for ""
	I0924 18:49:41.503780   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 18:49:41.503824   28466 start.go:340] cluster config:
	{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:fal
se ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:49:41.503959   28466 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:49:41.505754   28466 out.go:177] * Starting "ha-685475" primary control-plane node in "ha-685475" cluster
	I0924 18:49:41.507135   28466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:49:41.507191   28466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:49:41.507203   28466 cache.go:56] Caching tarball of preloaded images
	I0924 18:49:41.507285   28466 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:49:41.507297   28466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:49:41.507422   28466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:49:41.507688   28466 start.go:360] acquireMachinesLock for ha-685475: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:49:41.507737   28466 start.go:364] duration metric: took 29.748µs to acquireMachinesLock for "ha-685475"
	I0924 18:49:41.507757   28466 start.go:96] Skipping create...Using existing machine configuration
	I0924 18:49:41.507766   28466 fix.go:54] fixHost starting: 
	I0924 18:49:41.508061   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:49:41.508099   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:49:41.522542   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0924 18:49:41.522936   28466 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:49:41.523401   28466 main.go:141] libmachine: Using API Version  1
	I0924 18:49:41.523425   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:49:41.523886   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:49:41.524081   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.524255   28466 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:49:41.525826   28466 fix.go:112] recreateIfNeeded on ha-685475: state=Running err=<nil>
	W0924 18:49:41.525859   28466 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 18:49:41.529240   28466 out.go:177] * Updating the running kvm2 "ha-685475" VM ...
	I0924 18:49:41.530712   28466 machine.go:93] provisionDockerMachine start ...
	I0924 18:49:41.530738   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.530974   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.533165   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.533580   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.533605   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.533782   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.533967   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.534112   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.534223   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.534332   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.534517   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.534529   28466 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 18:49:41.643232   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:49:41.643261   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.643481   28466 buildroot.go:166] provisioning hostname "ha-685475"
	I0924 18:49:41.643503   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.643646   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.646212   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.646505   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.646531   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.646762   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.646980   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.647132   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.647272   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.647448   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.647651   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.647664   28466 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475 && echo "ha-685475" | sudo tee /etc/hostname
	I0924 18:49:41.779674   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:49:41.779708   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.782468   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.782847   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.782871   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.783056   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.783235   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.783401   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.783498   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.783622   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.783822   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.783838   28466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:49:41.891295   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:49:41.891327   28466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:49:41.891373   28466 buildroot.go:174] setting up certificates
	I0924 18:49:41.891383   28466 provision.go:84] configureAuth start
	I0924 18:49:41.891396   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.891628   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:49:41.894270   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.894622   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.894649   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.894778   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.896936   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.897279   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.897300   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.897473   28466 provision.go:143] copyHostCerts
	I0924 18:49:41.897496   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:49:41.897531   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:49:41.897543   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:49:41.897622   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:49:41.897720   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:49:41.897745   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:49:41.897755   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:49:41.897789   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:49:41.897849   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:49:41.897869   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:49:41.897887   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:49:41.897925   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:49:41.897989   28466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475 san=[127.0.0.1 192.168.39.7 ha-685475 localhost minikube]
	I0924 18:49:42.055432   28466 provision.go:177] copyRemoteCerts
	I0924 18:49:42.055488   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:49:42.055508   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:42.057935   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.058260   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:42.058288   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.058448   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:42.058639   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.058797   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:42.058931   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:49:42.144208   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:49:42.144266   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:49:42.169405   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:49:42.169472   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 18:49:42.192469   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:49:42.192528   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:49:42.217239   28466 provision.go:87] duration metric: took 325.844928ms to configureAuth
	I0924 18:49:42.217266   28466 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:49:42.217508   28466 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:49:42.217585   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:42.220321   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.220734   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:42.220759   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.220964   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:42.221168   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.221408   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.221555   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:42.221699   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:42.221901   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:42.221921   28466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:51:12.912067   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:51:12.912095   28466 machine.go:96] duration metric: took 1m31.381364631s to provisionDockerMachine
	I0924 18:51:12.912107   28466 start.go:293] postStartSetup for "ha-685475" (driver="kvm2")
	I0924 18:51:12.912117   28466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:51:12.912132   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:12.912403   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:51:12.912427   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:12.915611   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:12.916024   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:12.916049   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:12.916219   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:12.916390   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:12.916547   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:12.916627   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:12.996794   28466 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:51:13.001014   28466 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:51:13.001036   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:51:13.001100   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:51:13.001176   28466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:51:13.001185   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:51:13.001271   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:51:13.010156   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:51:13.032949   28466 start.go:296] duration metric: took 120.828545ms for postStartSetup
	I0924 18:51:13.032997   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.033245   28466 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0924 18:51:13.033275   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.035773   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.036149   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.036176   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.036325   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.036515   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.036714   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.036858   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	W0924 18:51:13.116202   28466 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0924 18:51:13.116227   28466 fix.go:56] duration metric: took 1m31.608462639s for fixHost
	I0924 18:51:13.116245   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.119152   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.119484   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.119507   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.119696   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.119893   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.120022   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.120150   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.120266   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:51:13.120454   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:51:13.120466   28466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:51:13.239336   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203873.209544799
	
	I0924 18:51:13.239356   28466 fix.go:216] guest clock: 1727203873.209544799
	I0924 18:51:13.239365   28466 fix.go:229] Guest: 2024-09-24 18:51:13.209544799 +0000 UTC Remote: 2024-09-24 18:51:13.116232987 +0000 UTC m=+91.734483744 (delta=93.311812ms)
	I0924 18:51:13.239396   28466 fix.go:200] guest clock delta is within tolerance: 93.311812ms
	I0924 18:51:13.239402   28466 start.go:83] releasing machines lock for "ha-685475", held for 1m31.731654477s
	I0924 18:51:13.239426   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.239702   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:51:13.242484   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.242890   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.242915   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.243055   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243574   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243740   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243820   28466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:51:13.243852   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.243943   28466 ssh_runner.go:195] Run: cat /version.json
	I0924 18:51:13.243963   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.246494   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246586   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246861   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.246884   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246911   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.246925   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.247052   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.247146   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.247218   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.247276   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.247332   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.247384   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.247462   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:13.247483   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:13.323180   28466 ssh_runner.go:195] Run: systemctl --version
	I0924 18:51:13.345830   28466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:51:13.497788   28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:51:13.503037   28466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:51:13.503095   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:51:13.511308   28466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 18:51:13.511326   28466 start.go:495] detecting cgroup driver to use...
	I0924 18:51:13.511381   28466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:51:13.526534   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:51:13.540133   28466 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:51:13.540182   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:51:13.553431   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:51:13.566458   28466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:51:13.725268   28466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:51:13.878455   28466 docker.go:233] disabling docker service ...
	I0924 18:51:13.878528   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:51:13.897552   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:51:13.910756   28466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:51:14.059929   28466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:51:14.221349   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:51:14.235950   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:51:14.253798   28466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:51:14.253871   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.264318   28466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:51:14.264386   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.274458   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.284280   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.294214   28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:51:14.304407   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.314343   28466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.324682   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.336710   28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:51:14.345836   28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:51:14.355292   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:51:14.499840   28466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:51:19.440001   28466 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.940112457s)
	I0924 18:51:19.440030   28466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:51:19.440083   28466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 18:51:19.444889   28466 start.go:563] Will wait 60s for crictl version
	I0924 18:51:19.444936   28466 ssh_runner.go:195] Run: which crictl
	I0924 18:51:19.448552   28466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:51:19.485550   28466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:51:19.485641   28466 ssh_runner.go:195] Run: crio --version
	I0924 18:51:19.513377   28466 ssh_runner.go:195] Run: crio --version
	I0924 18:51:19.543102   28466 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
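	The sed commands logged at 18:51:14 above all rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A minimal spot-check of that file, assuming the node IP, SSH key path and "docker" user shown elsewhere in this log, would be:

	  ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa \
	    docker@192.168.39.7 \
	    "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	  # Expected, given the edits above: pause_image = "registry.k8s.io/pause:3.10",
	  # cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a
	  # "net.ipv4.ip_unprivileged_port_start=0" entry under default_sysctls.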
	I0924 18:51:19.544497   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:51:19.547112   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:19.547442   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:19.547465   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:19.547660   28466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:51:19.552108   28466 kubeadm.go:883] updating cluster {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:51:19.552295   28466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:51:19.552356   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:51:19.593827   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:51:19.593856   28466 crio.go:433] Images already preloaded, skipping extraction
	I0924 18:51:19.593907   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:51:19.625890   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:51:19.625909   28466 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:51:19.625917   28466 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.31.1 crio true true} ...
	I0924 18:51:19.625996   28466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
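	The [Service] override above is the kubelet drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. One way to confirm the effective unit on the node afterwards (illustrative only, reusing the SSH identity from this log):

	  ssh -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa \
	    docker@192.168.39.7 "sudo systemctl cat kubelet"
	  # systemctl cat prints /lib/systemd/system/kubelet.service followed by every drop-in,
	  # so the ExecStart with --node-ip=192.168.39.7 shown above should appear at the end.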
	I0924 18:51:19.626053   28466 ssh_runner.go:195] Run: crio config
	I0924 18:51:19.670333   28466 cni.go:84] Creating CNI manager for ""
	I0924 18:51:19.670351   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 18:51:19.670359   28466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:51:19.670378   28466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685475 NodeName:ha-685475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:51:19.670530   28466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
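	Because this is a restart of an existing profile, the rendered kubeadm config above is staged as kubeadm.yaml.new (see the scp a little further down) rather than applied blindly. A hypothetical way to see what, if anything, changed versus the copy already on the node, assuming the previous render was kept at /var/tmp/minikube/kubeadm.yaml:

	  ssh -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa \
	    docker@192.168.39.7 \
	    "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"
	  # An empty diff means the generated config matches what the cluster was already running with.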
	I0924 18:51:19.670555   28466 kube-vip.go:115] generating kube-vip config ...
	I0924 18:51:19.670605   28466 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:51:19.681434   28466 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:51:19.681567   28466 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
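Note: kube-vip runs as a static pod; the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, which is the staticPodPath configured in the KubeletConfiguration earlier, so the kubelet starts it without needing the API server. A sketch for confirming this on the node (the crictl name filter is illustrative):

    # manifest in the kubelet's static pod directory
    ls -l /etc/kubernetes/manifests/kube-vip.yaml
    # running kube-vip container as seen by CRI-O
    sudo crictl ps --name kube-vip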
	I0924 18:51:19.681634   28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:51:19.690576   28466 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:51:19.690652   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 18:51:19.699728   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0924 18:51:19.715233   28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:51:19.730436   28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0924 18:51:19.745596   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:51:19.762934   28466 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:51:19.766460   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:51:19.910949   28466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:51:19.924822   28466 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.7
	I0924 18:51:19.924848   28466 certs.go:194] generating shared ca certs ...
	I0924 18:51:19.924865   28466 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:19.925032   28466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:51:19.925090   28466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:51:19.925106   28466 certs.go:256] generating profile certs ...
	I0924 18:51:19.925212   28466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:51:19.925243   28466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038
	I0924 18:51:19.925263   28466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.84 192.168.39.254]
	I0924 18:51:20.052965   28466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 ...
	I0924 18:51:20.052996   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038: {Name:mk85a34bb2d27d29b43a53b52a4110514c1f2ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:20.053193   28466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038 ...
	I0924 18:51:20.053210   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038: {Name:mk517342573979c2bae667d9fe14d0191c724102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:20.053305   28466 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:51:20.053471   28466 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
	I0924 18:51:20.053635   28466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:51:20.053653   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:51:20.053672   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:51:20.053690   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:51:20.053707   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:51:20.053723   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:51:20.053737   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:51:20.053755   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:51:20.053772   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:51:20.053834   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:51:20.053877   28466 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:51:20.053895   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:51:20.053928   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:51:20.053957   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:51:20.053984   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:51:20.054049   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:51:20.054082   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.054103   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:51:20.054121   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:51:20.054693   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:51:20.078522   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:51:20.101679   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:51:20.123948   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:51:20.145466   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 18:51:20.167037   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:51:20.235432   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:51:20.268335   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:51:20.320288   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:51:20.384090   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:51:20.451905   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:51:20.618753   28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
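Note: the apiserver certificate generated above (apiserver.crt.4e1f7038, later copied to /var/lib/minikube/certs/apiserver.crt) was issued for the SAN list logged at 18:51:19.925263, which covers every control-plane IP plus the HA VIP 192.168.39.254. A quick sketch for confirming the SANs on the node (path from the log; the grep pattern is illustrative):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 "Subject Alternative Name"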
	I0924 18:51:20.732295   28466 ssh_runner.go:195] Run: openssl version
	I0924 18:51:20.756467   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:51:20.801914   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.842090   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.842152   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.872234   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:51:20.950378   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:51:21.017453   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.045197   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.045268   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.087738   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:51:21.117324   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:51:21.152311   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.160918   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.160974   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.173180   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
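Note: the three ln -fs steps above install the CAs using OpenSSL's subject-hash lookup convention: the hash printed by openssl x509 -hash becomes the symlink name in /etc/ssl/certs with a .0 suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal sketch of the same convention for the minikube CA, with paths taken from the log:

    # legacy subject hash of the CA, e.g. b5213941 here
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # symlink OpenSSL consults when verifying against this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"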
	I0924 18:51:21.261879   28466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:51:21.284126   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 18:51:21.299535   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 18:51:21.318232   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 18:51:21.332109   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 18:51:21.346404   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 18:51:21.352417   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
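Note: each -checkend 86400 probe above asks openssl whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it is expired or about to expire, presumably so stale certificates can be regenerated before the cluster is brought back up. Sketch with one of the paths from the log:

    # exit 0: valid for at least another 24h; non-zero: expiring or expired
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "ok for 24h" || echo "expiring within 24h"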
	I0924 18:51:21.361303   28466 kubeadm.go:392] StartCluster: {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:51:21.361397   28466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:51:21.361438   28466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:51:21.417408   28466 cri.go:89] found id: "28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6"
	I0924 18:51:21.417431   28466 cri.go:89] found id: "fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706"
	I0924 18:51:21.417438   28466 cri.go:89] found id: "f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed"
	I0924 18:51:21.417442   28466 cri.go:89] found id: "98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf"
	I0924 18:51:21.417446   28466 cri.go:89] found id: "7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18"
	I0924 18:51:21.417450   28466 cri.go:89] found id: "f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936"
	I0924 18:51:21.417454   28466 cri.go:89] found id: "4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9"
	I0924 18:51:21.417458   28466 cri.go:89] found id: "7f9104d190f07befbd09ee466b024746ff7b2b398de183cd085ea33f265a2da8"
	I0924 18:51:21.417462   28466 cri.go:89] found id: "15accc82e018bbcea04a32d89aede0d281ce0186e37eea6844ffa844172f9e4e"
	I0924 18:51:21.417468   28466 cri.go:89] found id: "97afe98b678e4be38b759ea6cb446891cc336ed41021ba6bbb86be29a18b6dbd"
	I0924 18:51:21.417471   28466 cri.go:89] found id: "2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235"
	I0924 18:51:21.417475   28466 cri.go:89] found id: "7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e"
	I0924 18:51:21.417479   28466 cri.go:89] found id: "75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f"
	I0924 18:51:21.417484   28466 cri.go:89] found id: "709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678"
	I0924 18:51:21.417488   28466 cri.go:89] found id: "9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9"
	I0924 18:51:21.417493   28466 cri.go:89] found id: "40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a"
	I0924 18:51:21.417496   28466 cri.go:89] found id: "e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc"
	I0924 18:51:21.417501   28466 cri.go:89] found id: "efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707"
	I0924 18:51:21.417505   28466 cri.go:89] found id: "5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8"
	I0924 18:51:21.417510   28466 cri.go:89] found id: "838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad"
	I0924 18:51:21.417515   28466 cri.go:89] found id: ""
	I0924 18:51:21.417558   28466 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.318041955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204026318016519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d08d03a0-427f-4c58-bbb9-c58a9a12f9a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.318759931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bbc0d29-b519-4e6b-96eb-4c7974245254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.318858531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bbc0d29-b519-4e6b-96eb-4c7974245254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.320088837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bbc0d29-b519-4e6b-96eb-4c7974245254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.362974474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df72212a-8a64-4377-b182-271a9c520b53 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.363051744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df72212a-8a64-4377-b182-271a9c520b53 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.364163629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c90aa6ce-6bd7-4b91-8027-ac8e887ca739 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.364559365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204026364539233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c90aa6ce-6bd7-4b91-8027-ac8e887ca739 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.364971863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62bdaff2-4607-4123-b911-0c4470533d04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.365025085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62bdaff2-4607-4123-b911-0c4470533d04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.365457208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62bdaff2-4607-4123-b911-0c4470533d04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.403158414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e1d6ebf-4b59-4ec0-8a78-359549e38773 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.403231214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e1d6ebf-4b59-4ec0-8a78-359549e38773 name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.404141947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40ddc3f3-454f-45c3-b030-a2579513b2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.404762739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204026404731126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40ddc3f3-454f-45c3-b030-a2579513b2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.405606321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=245b026a-958b-4960-8fb4-49bcc8b3987a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.405661138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=245b026a-958b-4960-8fb4-49bcc8b3987a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.406141333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=245b026a-958b-4960-8fb4-49bcc8b3987a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.444108500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5593dd86-c0fd-4ecb-b706-32e71e90cb0a name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.444182119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5593dd86-c0fd-4ecb-b706-32e71e90cb0a name=/runtime.v1.RuntimeService/Version
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.445054881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=487d4261-91c5-450c-bed4-87aa4cac8adb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.445485291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204026445463849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=487d4261-91c5-450c-bed4-87aa4cac8adb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.446128036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df052d49-0e83-4695-88c9-6e6b16157431 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.446181449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df052d49-0e83-4695-88c9-6e6b16157431 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:53:46 ha-685475 crio[3617]: time="2024-09-24 18:53:46.446567781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df052d49-0e83-4695-88c9-6e6b16157431 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	807ce3be93776       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   033dbea435ae9       storage-provisioner
	33767687f698e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   4216938b3e9dc       kube-controller-manager-ha-685475
	195465ccb45fe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   0393248f7c5f7       kube-apiserver-ha-685475
	1af26be5f5a40       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ef4a912012e54       busybox-7dff88458-hmkfk
	5fd74353f8eea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   033dbea435ae9       storage-provisioner
	d0ae32a98e68a       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   fc3c654a42b04       kube-vip-ha-685475
	28b9e54f0d805       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   8f9c03feb87b3       coredns-7c65d6cfc9-jf7wr
	fd3e8519755a0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   622f4021ab51a       coredns-7c65d6cfc9-fchhl
	f1e1b3423dfcf       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   3449d7cbbba1b       kindnet-ms6qb
	624ae9ed966d2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   762235b133b6c       kube-proxy-b8x2w
	98174055d6b70       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   b3a57b94e74b8       etcd-ha-685475
	7c127d68bc74b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   8ee0b5e39414e       kube-scheduler-ha-685475
	f14327e75ef88       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   0393248f7c5f7       kube-apiserver-ha-685475
	4411ef38af3f8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   4216938b3e9dc       kube-controller-manager-ha-685475
	9b86d48937d84       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago        Exited              busybox                   0                   2517ecd8d61cd       busybox-7dff88458-hmkfk
	2c7b4241a9158       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   c2c9f0a12f919       coredns-7c65d6cfc9-jf7wr
	75aac96a2239b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   9f53b2b4e4e29       coredns-7c65d6cfc9-fchhl
	709da73468c82       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   6c65efd736505       kindnet-ms6qb
	9ea87ecceac1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   bbb4cec818818       kube-proxy-b8x2w
	e62a02dab3075       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   9ade6d826e125       kube-scheduler-ha-685475
	efe5b6f3ceb69       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   5fa1209cd75b8       etcd-ha-685475
	
	
	==> coredns [28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[115397641]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:51:32.982) (total time: 10119ms):
	Trace[115397641]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer 10119ms (18:51:43.102)
	Trace[115397641]: [10.119575308s] [10.119575308s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33046->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235] <==
	[INFO] 10.244.1.2:44949 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002196411s
	[INFO] 10.244.1.2:57646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132442s
	[INFO] 10.244.1.2:45986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001533759s
	[INFO] 10.244.1.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159221s
	[INFO] 10.244.1.2:47730 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122802s
	[INFO] 10.244.2.2:49373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174893s
	[INFO] 10.244.0.4:52492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008787s
	[INFO] 10.244.0.4:33570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049583s
	[INFO] 10.244.0.4:35717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036153s
	[INFO] 10.244.1.2:39348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262289s
	[INFO] 10.244.1.2:44144 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216176s
	[INFO] 10.244.1.2:37532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017928s
	[INFO] 10.244.2.2:34536 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139562s
	[INFO] 10.244.0.4:43378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108735s
	[INFO] 10.244.0.4:50975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139299s
	[INFO] 10.244.0.4:36798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091581s
	[INFO] 10.244.1.2:55450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136524s
	[INFO] 10.244.1.2:46887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019253s
	[INFO] 10.244.1.2:39275 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113225s
	[INFO] 10.244.1.2:44182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f] <==
	[INFO] 10.244.1.2:39503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018791s
	[INFO] 10.244.1.2:56200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000107364s
	[INFO] 10.244.1.2:50181 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000477328s
	[INFO] 10.244.2.2:48517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149349s
	[INFO] 10.244.2.2:37426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161156s
	[INFO] 10.244.2.2:51780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245454s
	[INFO] 10.244.0.4:37360 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192766s
	[INFO] 10.244.0.4:49282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067708s
	[INFO] 10.244.0.4:50475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049077s
	[INFO] 10.244.0.4:42734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103381s
	[INFO] 10.244.1.2:34090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126966s
	[INFO] 10.244.1.2:49474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199973s
	[INFO] 10.244.1.2:47488 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080517s
	[INFO] 10.244.2.2:58501 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129358s
	[INFO] 10.244.2.2:35831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166592s
	[INFO] 10.244.2.2:46260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105019s
	[INFO] 10.244.0.4:34512 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070631s
	[INFO] 10.244.1.2:40219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095437s
	[INFO] 10.244.2.2:45584 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263954s
	[INFO] 10.244.2.2:45346 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105804s
	[INFO] 10.244.2.2:33451 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099783s
	[INFO] 10.244.0.4:54263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102026s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=9m57s&timeoutSeconds=597&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[233214872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:51:32.754) (total time: 10348ms):
	Trace[233214872]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer 10348ms (18:51:43.103)
	Trace[233214872]: [10.348226851s] [10.348226851s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-685475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:53:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-685475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6728db94ca4a90af6f3c76683b52c2
	  System UUID:                7d6728db-94ca-4a90-af6f-3c76683b52c2
	  Boot ID:                    d6338982-1afe-44d6-a104-48e80df984ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmkfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-7c65d6cfc9-fchhl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-jf7wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-685475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ms6qb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-685475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-685475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-b8x2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-685475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-685475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 103s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-685475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-685475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-685475 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-685475 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   NodeNotReady             2m40s (x3 over 3m30s)  kubelet          Node ha-685475 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m27s (x2 over 3m27s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           106s                   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           42s                    node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	
	
	Name:               ha-685475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:42:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:53:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-685475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad56c26961cf4d94852f19122c4c499b
	  System UUID:                ad56c269-61cf-4d94-852f-19122c4c499b
	  Boot ID:                    020aa55b-e97a-436e-ae15-d221276dc925
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6g8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 etcd-ha-685475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-pwvfj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-685475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-685475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dlr8f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-685475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-685475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  NodeNotReady             8m10s                node-controller  Node ha-685475-m02 status is now: NodeNotReady
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           96s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           42s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	
	
	Name:               ha-685475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_43_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:43:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:53:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:53:20 +0000   Tue, 24 Sep 2024 18:52:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:53:20 +0000   Tue, 24 Sep 2024 18:52:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:53:20 +0000   Tue, 24 Sep 2024 18:52:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:53:20 +0000   Tue, 24 Sep 2024 18:52:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-685475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666f55d24f014a7598addca9cb06654f
	  System UUID:                666f55d2-4f01-4a75-98ad-dca9cb06654f
	  Boot ID:                    3afb02ba-2c91-45a1-b041-2ebea8395fc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gksmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 etcd-ha-685475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-7w5dn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-685475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-685475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-mzlfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-685475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-685475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-685475-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	  Normal   NodeNotReady             66s                node-controller  Node ha-685475-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s (x2 over 57s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 57s)  kubelet          Node ha-685475-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 57s)  kubelet          Node ha-685475-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s                kubelet          Node ha-685475-m03 has been rebooted, boot id: 3afb02ba-2c91-45a1-b041-2ebea8395fc1
	  Normal   NodeReady                57s                kubelet          Node ha-685475-m03 status is now: NodeReady
	  Normal   RegisteredNode           42s                node-controller  Node ha-685475-m03 event: Registered Node ha-685475-m03 in Controller
	
	
	Name:               ha-685475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_44_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:53:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:53:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:53:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:53:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:53:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-685475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5be0e3597a0f4236b1fa9e5e221d49dc
	  System UUID:                5be0e359-7a0f-4236-b1fa-9e5e221d49dc
	  Boot ID:                    5cca5d30-8e5e-4d33-9fe2-bd3febd4e1d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n4nlv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m22s
	  kube-system                 kube-proxy-9m62z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 9m18s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m22s (x2 over 9m23s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m22s (x2 over 9m23s)  kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m22s (x2 over 9m23s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m20s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           9m19s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           9m19s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   NodeReady                9m4s                   kubelet          Node ha-685475-m04 status is now: NodeReady
	  Normal   RegisteredNode           106s                   node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   NodeNotReady             66s                    node-controller  Node ha-685475-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           42s                    node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)        kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)        kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)        kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                     kubelet          Node ha-685475-m04 has been rebooted, boot id: 5cca5d30-8e5e-4d33-9fe2-bd3febd4e1d0
	  Normal   NodeNotReady             9s                     kubelet          Node ha-685475-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                     kubelet          Node ha-685475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep24 18:41] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.156659] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148421] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.267579] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.782999] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.621822] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.062553] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.171108] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.082463] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344664] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.133235] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:42] kauditd_printk_skb: 24 callbacks suppressed
	[Sep24 18:51] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +0.159556] systemd-fstab-generator[3554]: Ignoring "noauto" option for root device
	[  +0.181069] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +0.157125] systemd-fstab-generator[3580]: Ignoring "noauto" option for root device
	[  +0.280727] systemd-fstab-generator[3608]: Ignoring "noauto" option for root device
	[  +5.409899] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
	[  +0.088616] kauditd_printk_skb: 100 callbacks suppressed
	[  +8.405814] kauditd_printk_skb: 107 callbacks suppressed
	[  +7.581119] kauditd_printk_skb: 2 callbacks suppressed
	[Sep24 18:52] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf] <==
	{"level":"warn","ts":"2024-09-24T18:52:44.246367Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:52:44.334362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:52:44.346619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:52:44.447183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bb39151d8411994b","from":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T18:52:45.472557Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:45.472685Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:46.728574Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:46.728604Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:49.474968Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:49.475138Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:51.729715Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:51.729941Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:53.477590Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:53.477657Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:56.729947Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:56.730101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f557fef8b50aff79","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:57.479874Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T18:52:57.480020Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-24T18:52:58.157753Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.157856Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.162823Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.182429Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"f557fef8b50aff79","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-24T18:52:58.184878Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.185454Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"f557fef8b50aff79","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-24T18:52:58.185583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	
	
	==> etcd [efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707] <==
	2024/09/24 18:49:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/24 18:49:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-24T18:49:42.403705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T18:49:42.403759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T18:49:42.403846Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bb39151d8411994b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-24T18:49:42.404014Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404102Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404178Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404346Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404417Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404527Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404585Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404712Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404857Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404913Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404961Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404990Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.407773Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"warn","ts":"2024-09-24T18:49:42.407884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.823768217s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-24T18:49:42.407928Z","caller":"traceutil/trace.go:171","msg":"trace[58929591] range","detail":"{range_begin:; range_end:; }","duration":"1.823851418s","start":"2024-09-24T18:49:40.584068Z","end":"2024-09-24T18:49:42.407919Z","steps":["trace[58929591] 'agreement among raft nodes before linearized reading'  (duration: 1.823765319s)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:49:42.408001Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-09-24T18:49:42.408029Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-685475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	{"level":"error","ts":"2024-09-24T18:49:42.408028Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 18:53:47 up 13 min,  0 users,  load average: 0.16, 0.35, 0.25
	Linux ha-685475 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678] <==
	I0924 18:49:06.555439       1 main.go:299] handling current node
	I0924 18:49:16.554897       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:16.554987       1 main.go:299] handling current node
	I0924 18:49:16.555013       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:16.555030       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:16.555161       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:16.555184       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:16.555240       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:16.555259       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:49:26.554865       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:26.554905       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:26.555089       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:26.555110       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:26.555160       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:26.555178       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:49:26.555224       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:26.555242       1 main.go:299] handling current node
	I0924 18:49:36.554936       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:36.555097       1 main.go:299] handling current node
	I0924 18:49:36.555133       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:36.555152       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:36.555314       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:36.555356       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:36.555452       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:36.555486       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed] <==
	I0924 18:53:11.875943       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:53:21.870206       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:53:21.870302       1 main.go:299] handling current node
	I0924 18:53:21.870329       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:53:21.870348       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:53:21.870550       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:53:21.870584       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:53:21.870642       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:53:21.870660       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:53:31.877051       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:53:31.877095       1 main.go:299] handling current node
	I0924 18:53:31.877107       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:53:31.877114       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:53:31.877287       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:53:31.877321       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:53:31.877384       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:53:31.877400       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:53:41.870203       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:53:41.870365       1 main.go:299] handling current node
	I0924 18:53:41.870425       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:53:41.870463       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:53:41.870620       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:53:41.870662       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:53:41.870767       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:53:41.870792       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4] <==
	I0924 18:52:07.439120       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 18:52:07.439283       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 18:52:07.524350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 18:52:07.524436       1 policy_source.go:224] refreshing policies
	I0924 18:52:07.526308       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 18:52:07.526526       1 aggregator.go:171] initial CRD sync complete...
	I0924 18:52:07.526575       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 18:52:07.526931       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 18:52:07.541498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 18:52:07.558393       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0924 18:52:07.598131       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.84]
	I0924 18:52:07.600178       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:52:07.612550       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:52:07.614064       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0924 18:52:07.619004       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0924 18:52:07.622567       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 18:52:07.626230       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 18:52:07.627304       1 cache.go:39] Caches are synced for autoregister controller
	I0924 18:52:07.631339       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 18:52:07.631415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 18:52:07.632167       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 18:52:07.632465       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 18:52:07.641168       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 18:52:08.430474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 18:52:08.730585       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.7 192.168.39.84]
	
	
	==> kube-apiserver [f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936] <==
	I0924 18:51:21.054783       1 options.go:228] external host was not specified, using 192.168.39.7
	I0924 18:51:21.064779       1 server.go:142] Version: v1.31.1
	I0924 18:51:21.064849       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:51:21.784869       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0924 18:51:21.805644       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 18:51:21.806182       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0924 18:51:21.806213       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0924 18:51:21.806395       1 instance.go:232] Using reconciler: lease
	W0924 18:51:41.778875       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0924 18:51:41.784381       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0924 18:51:41.807272       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0924 18:51:41.807364       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69] <==
	I0924 18:52:40.685099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:40.715410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:52:40.718671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:40.825208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.776074ms"
	I0924 18:52:40.828732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.364µs"
	I0924 18:52:41.080592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:45.932160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:52:49.849137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:49.862583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:50.007994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m02"
	I0924 18:52:50.717417       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.883µs"
	I0924 18:52:50.897519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:52:51.169471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:04.767719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:04.862357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:07.169203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.4866ms"
	I0924 18:53:07.169355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.811µs"
	I0924 18:53:20.313392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m03"
	I0924 18:53:37.331685       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:37.352016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:37.365302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:38.353641       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685475-m04"
	I0924 18:53:38.353764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:38.367518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:53:39.783907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	
	
	==> kube-controller-manager [4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9] <==
	I0924 18:51:22.413437       1 serving.go:386] Generated self-signed cert in-memory
	I0924 18:51:22.649049       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0924 18:51:22.649144       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:51:22.650842       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 18:51:22.650990       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 18:51:22.651462       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0924 18:51:22.651556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0924 18:51:42.812333       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.7:8443/healthz\": dial tcp 192.168.39.7:8443: connect: connection refused"
	
	
	==> kube-proxy [624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:51:23.518303       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:26.591088       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:29.662200       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:35.807300       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:45.022640       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:52:03.455843       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0924 18:52:03.455960       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0924 18:52:03.456032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:52:03.491361       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:52:03.491449       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:52:03.491485       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:52:03.494272       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:52:03.494629       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:52:03.494850       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:52:03.496326       1 config.go:199] "Starting service config controller"
	I0924 18:52:03.496372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:52:03.496441       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:52:03.496475       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:52:03.496456       1 config.go:328] "Starting node config controller"
	I0924 18:52:03.497267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:52:05.797169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:52:05.797308       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:52:05.797339       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9] <==
	W0924 18:48:19.838522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:19.838576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0924 18:48:19.838529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:28.030159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:28.030301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:28.030517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	W0924 18:48:28.030580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:28.030634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0924 18:48:28.030636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:37.632170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:37.632418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:37.632514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:37.632724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:40.703562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:40.703713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:59.135603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:59.135658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:02.206735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:02.206793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:05.278907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:05.279023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:29.855020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:29.855121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:42.142362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:42.142707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18] <==
	W0924 18:51:58.734328       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.7:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:51:58.734489       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.7:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:51:59.482389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.7:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:51:59.482467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.7:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.305853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.7:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.306300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.7:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.317411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.7:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.317560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.7:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.704339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.7:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.704421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.7:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.740626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.7:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.740711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.7:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:01.075553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:01.075680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:01.499224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.7:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:01.499362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.7:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:02.913549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:02.913660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:02.993273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.7:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:02.993411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.7:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:03.899703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:03.899795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:04.940409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.7:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:04.940545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.7:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	I0924 18:52:14.218620       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc] <==
	E0924 18:41:17.231072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.384731       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:41:17.384781       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:41:17.385753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.385816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:41:20.277859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 18:43:50.159728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w6g8l" node="ha-685475-m02"
	E0924 18:43:50.159906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" pod="default/busybox-7dff88458-w6g8l"
	E0924 18:43:50.160616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hmkfk" node="ha-685475"
	E0924 18:43:50.160683       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" pod="default/busybox-7dff88458-hmkfk"
	E0924 18:44:24.296261       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:44:24.296334       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d172ae09-1eb7-4e5d-a5a1-e865b926b6eb(kube-system/kube-proxy-9m62z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9m62z"
	E0924 18:44:24.296350       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" pod="kube-system/kube-proxy-9m62z"
	I0924 18:44:24.296367       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:49:33.050251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0924 18:49:36.566629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0924 18:49:36.907142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0924 18:49:37.791916       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0924 18:49:37.973674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0924 18:49:38.551229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0924 18:49:40.418969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0924 18:49:40.596778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0924 18:49:42.179148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0924 18:49:42.271632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0924 18:49:42.334956       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 18:52:32 ha-685475 kubelet[1306]: I0924 18:52:32.221316    1306 scope.go:117] "RemoveContainer" containerID="5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88"
	Sep 24 18:52:32 ha-685475 kubelet[1306]: E0924 18:52:32.221560    1306 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e0f5497a-ae6d-4051-b1bc-c84c91d0fd12)\"" pod="kube-system/storage-provisioner" podUID="e0f5497a-ae6d-4051-b1bc-c84c91d0fd12"
	Sep 24 18:52:39 ha-685475 kubelet[1306]: E0924 18:52:39.364423    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203959364088780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:52:39 ha-685475 kubelet[1306]: E0924 18:52:39.364776    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203959364088780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:52:43 ha-685475 kubelet[1306]: I0924 18:52:43.222333    1306 scope.go:117] "RemoveContainer" containerID="5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88"
	Sep 24 18:52:49 ha-685475 kubelet[1306]: E0924 18:52:49.367003    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203969366632125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:52:49 ha-685475 kubelet[1306]: E0924 18:52:49.367042    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203969366632125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:52:57 ha-685475 kubelet[1306]: I0924 18:52:57.222105    1306 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-685475" podUID="ad2ed915-5276-4ba2-b097-df9074e8c2ef"
	Sep 24 18:52:57 ha-685475 kubelet[1306]: I0924 18:52:57.241084    1306 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-685475"
	Sep 24 18:52:59 ha-685475 kubelet[1306]: I0924 18:52:59.239773    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-685475" podStartSLOduration=2.239748626 podStartE2EDuration="2.239748626s" podCreationTimestamp="2024-09-24 18:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-24 18:52:59.239550571 +0000 UTC m=+700.181274416" watchObservedRunningTime="2024-09-24 18:52:59.239748626 +0000 UTC m=+700.181472487"
	Sep 24 18:52:59 ha-685475 kubelet[1306]: E0924 18:52:59.369147    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203979368397878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:52:59 ha-685475 kubelet[1306]: E0924 18:52:59.369643    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203979368397878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:09 ha-685475 kubelet[1306]: E0924 18:53:09.372500    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203989372245290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:09 ha-685475 kubelet[1306]: E0924 18:53:09.372571    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203989372245290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:19 ha-685475 kubelet[1306]: E0924 18:53:19.241640    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:53:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:53:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:53:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:53:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:53:19 ha-685475 kubelet[1306]: E0924 18:53:19.373691    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203999373471990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:19 ha-685475 kubelet[1306]: E0924 18:53:19.373714    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727203999373471990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:29 ha-685475 kubelet[1306]: E0924 18:53:29.375061    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204009374710528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:29 ha-685475 kubelet[1306]: E0924 18:53:29.375095    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204009374710528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:39 ha-685475 kubelet[1306]: E0924 18:53:39.376790    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204019376525130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:53:39 ha-685475 kubelet[1306]: E0924 18:53:39.376837    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204019376525130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 18:53:46.050158   29815 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19700-3751/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
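The logs.go:258 error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-token limit: a single line in lastStart.txt is longer than the buffer, so the scan aborts with "token too long". A minimal sketch, assuming a stand-in file name rather than minikube's actual logs code, of reading a file with very long lines by enlarging the scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path standing in for a log file with very long lines,
	// like the lastStart.txt mentioned in the error above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the 64 KiB default (bufio.MaxScanTokenSize)
	// to 1 MiB so a single long line no longer fails with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("scan failed: %v", err)
	}
}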
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-685475 -n ha-685475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.30s)
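The kubelet "Could not set up iptables canary" entries earlier in this log fail because the ip6tables nat table is not available in the guest kernel (the ip6table_nat module is not loaded), so creating the KUBE-KUBELET-CANARY chain exits with status 3 while the IPv4 side succeeds. The sketch below reproduces that probe with os/exec; the chain and table names come from the log, but the exec-based check is only an illustration, not kubelet's real iptables package, and it needs root to run:

package main

import (
	"fmt"
	"os/exec"
)

// probeCanary tries to create (and then remove) the canary chain in the given
// table, the same operation the kubelet log above reports as failing for
// table "nat" on ip6tables.
func probeCanary(binary, table, chain string) error {
	out, err := exec.Command(binary, "-t", table, "-N", chain).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s -t %s -N %s: %v: %s", binary, table, chain, err, out)
	}
	// Delete the chain again so the probe leaves no state behind.
	return exec.Command(binary, "-t", table, "-X", chain).Run()
}

func main() {
	// On the node above this fails for ip6tables because the nat table does
	// not exist (ip6table_nat not loaded); the iptables probe succeeds.
	for _, bin := range []string{"iptables", "ip6tables"} {
		if err := probeCanary(bin, "nat", "KUBE-KUBELET-CANARY"); err != nil {
			fmt.Println("canary failed:", err)
		} else {
			fmt.Println("canary ok:", bin)
		}
	}
}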

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 stop -v=7 --alsologtostderr
E0924 18:54:49.789853   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-685475 stop -v=7 --alsologtostderr: exit status 82 (2m0.469009388s)

                                                
                                                
-- stdout --
	* Stopping node "ha-685475-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:54:04.986278   30254 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:54:04.986382   30254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:54:04.986391   30254 out.go:358] Setting ErrFile to fd 2...
	I0924 18:54:04.986395   30254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:54:04.986599   30254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:54:04.986804   30254 out.go:352] Setting JSON to false
	I0924 18:54:04.986916   30254 mustload.go:65] Loading cluster: ha-685475
	I0924 18:54:04.987290   30254 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:54:04.987398   30254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:54:04.987583   30254 mustload.go:65] Loading cluster: ha-685475
	I0924 18:54:04.987731   30254 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:54:04.987763   30254 stop.go:39] StopHost: ha-685475-m04
	I0924 18:54:04.988163   30254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:54:04.988218   30254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:54:05.002771   30254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0924 18:54:05.003246   30254 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:54:05.003814   30254 main.go:141] libmachine: Using API Version  1
	I0924 18:54:05.003838   30254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:54:05.004127   30254 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:54:05.006565   30254 out.go:177] * Stopping node "ha-685475-m04"  ...
	I0924 18:54:05.008300   30254 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 18:54:05.008324   30254 main.go:141] libmachine: (ha-685475-m04) Calling .DriverName
	I0924 18:54:05.008527   30254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 18:54:05.008554   30254 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHHostname
	I0924 18:54:05.011061   30254 main.go:141] libmachine: (ha-685475-m04) DBG | domain ha-685475-m04 has defined MAC address 52:54:00:46:d7:0c in network mk-ha-685475
	I0924 18:54:05.011479   30254 main.go:141] libmachine: (ha-685475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:d7:0c", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:53:31 +0000 UTC Type:0 Mac:52:54:00:46:d7:0c Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-685475-m04 Clientid:01:52:54:00:46:d7:0c}
	I0924 18:54:05.011500   30254 main.go:141] libmachine: (ha-685475-m04) DBG | domain ha-685475-m04 has defined IP address 192.168.39.123 and MAC address 52:54:00:46:d7:0c in network mk-ha-685475
	I0924 18:54:05.011637   30254 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHPort
	I0924 18:54:05.011772   30254 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHKeyPath
	I0924 18:54:05.011907   30254 main.go:141] libmachine: (ha-685475-m04) Calling .GetSSHUsername
	I0924 18:54:05.012003   30254 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475-m04/id_rsa Username:docker}
	I0924 18:54:05.100442   30254 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 18:54:05.152758   30254 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 18:54:05.205324   30254 main.go:141] libmachine: Stopping "ha-685475-m04"...
	I0924 18:54:05.205357   30254 main.go:141] libmachine: (ha-685475-m04) Calling .GetState
	I0924 18:54:05.206978   30254 main.go:141] libmachine: (ha-685475-m04) Calling .Stop
	I0924 18:54:05.210246   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 0/120
	I0924 18:54:06.211576   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 1/120
	I0924 18:54:07.213392   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 2/120
	I0924 18:54:08.215369   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 3/120
	I0924 18:54:09.216982   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 4/120
	I0924 18:54:10.218985   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 5/120
	I0924 18:54:11.220428   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 6/120
	I0924 18:54:12.221893   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 7/120
	I0924 18:54:13.223069   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 8/120
	I0924 18:54:14.225291   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 9/120
	I0924 18:54:15.227527   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 10/120
	I0924 18:54:16.229419   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 11/120
	I0924 18:54:17.230934   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 12/120
	I0924 18:54:18.232164   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 13/120
	I0924 18:54:19.233973   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 14/120
	I0924 18:54:20.235771   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 15/120
	I0924 18:54:21.237804   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 16/120
	I0924 18:54:22.239318   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 17/120
	I0924 18:54:23.241512   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 18/120
	I0924 18:54:24.243056   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 19/120
	I0924 18:54:25.245315   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 20/120
	I0924 18:54:26.246531   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 21/120
	I0924 18:54:27.247939   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 22/120
	I0924 18:54:28.249321   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 23/120
	I0924 18:54:29.250936   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 24/120
	I0924 18:54:30.252607   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 25/120
	I0924 18:54:31.254044   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 26/120
	I0924 18:54:32.255525   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 27/120
	I0924 18:54:33.257325   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 28/120
	I0924 18:54:34.259406   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 29/120
	I0924 18:54:35.261564   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 30/120
	I0924 18:54:36.262800   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 31/120
	I0924 18:54:37.264099   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 32/120
	I0924 18:54:38.265464   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 33/120
	I0924 18:54:39.267470   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 34/120
	I0924 18:54:40.269371   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 35/120
	I0924 18:54:41.270659   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 36/120
	I0924 18:54:42.271957   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 37/120
	I0924 18:54:43.274329   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 38/120
	I0924 18:54:44.276165   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 39/120
	I0924 18:54:45.278369   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 40/120
	I0924 18:54:46.279742   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 41/120
	I0924 18:54:47.281212   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 42/120
	I0924 18:54:48.282657   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 43/120
	I0924 18:54:49.284010   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 44/120
	I0924 18:54:50.285895   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 45/120
	I0924 18:54:51.287118   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 46/120
	I0924 18:54:52.288464   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 47/120
	I0924 18:54:53.290558   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 48/120
	I0924 18:54:54.291814   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 49/120
	I0924 18:54:55.293970   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 50/120
	I0924 18:54:56.295137   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 51/120
	I0924 18:54:57.297279   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 52/120
	I0924 18:54:58.298549   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 53/120
	I0924 18:54:59.299707   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 54/120
	I0924 18:55:00.301322   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 55/120
	I0924 18:55:01.302748   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 56/120
	I0924 18:55:02.304825   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 57/120
	I0924 18:55:03.306095   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 58/120
	I0924 18:55:04.307418   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 59/120
	I0924 18:55:05.309410   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 60/120
	I0924 18:55:06.310929   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 61/120
	I0924 18:55:07.312021   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 62/120
	I0924 18:55:08.313289   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 63/120
	I0924 18:55:09.314354   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 64/120
	I0924 18:55:10.316292   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 65/120
	I0924 18:55:11.317666   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 66/120
	I0924 18:55:12.319059   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 67/120
	I0924 18:55:13.321514   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 68/120
	I0924 18:55:14.322765   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 69/120
	I0924 18:55:15.324345   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 70/120
	I0924 18:55:16.325533   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 71/120
	I0924 18:55:17.326911   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 72/120
	I0924 18:55:18.328809   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 73/120
	I0924 18:55:19.330197   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 74/120
	I0924 18:55:20.332231   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 75/120
	I0924 18:55:21.333551   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 76/120
	I0924 18:55:22.334904   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 77/120
	I0924 18:55:23.336304   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 78/120
	I0924 18:55:24.337498   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 79/120
	I0924 18:55:25.339550   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 80/120
	I0924 18:55:26.341255   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 81/120
	I0924 18:55:27.342529   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 82/120
	I0924 18:55:28.343693   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 83/120
	I0924 18:55:29.345297   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 84/120
	I0924 18:55:30.347166   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 85/120
	I0924 18:55:31.349239   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 86/120
	I0924 18:55:32.350817   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 87/120
	I0924 18:55:33.353006   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 88/120
	I0924 18:55:34.354949   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 89/120
	I0924 18:55:35.356505   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 90/120
	I0924 18:55:36.357910   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 91/120
	I0924 18:55:37.359684   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 92/120
	I0924 18:55:38.361242   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 93/120
	I0924 18:55:39.362499   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 94/120
	I0924 18:55:40.364470   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 95/120
	I0924 18:55:41.365724   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 96/120
	I0924 18:55:42.367019   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 97/120
	I0924 18:55:43.369419   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 98/120
	I0924 18:55:44.371133   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 99/120
	I0924 18:55:45.373278   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 100/120
	I0924 18:55:46.374672   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 101/120
	I0924 18:55:47.376169   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 102/120
	I0924 18:55:48.377544   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 103/120
	I0924 18:55:49.379211   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 104/120
	I0924 18:55:50.381030   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 105/120
	I0924 18:55:51.383065   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 106/120
	I0924 18:55:52.385117   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 107/120
	I0924 18:55:53.386438   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 108/120
	I0924 18:55:54.387694   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 109/120
	I0924 18:55:55.389820   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 110/120
	I0924 18:55:56.391063   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 111/120
	I0924 18:55:57.392348   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 112/120
	I0924 18:55:58.394000   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 113/120
	I0924 18:55:59.395487   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 114/120
	I0924 18:56:00.397488   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 115/120
	I0924 18:56:01.399676   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 116/120
	I0924 18:56:02.401270   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 117/120
	I0924 18:56:03.402601   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 118/120
	I0924 18:56:04.404069   30254 main.go:141] libmachine: (ha-685475-m04) Waiting for machine to stop 119/120
	I0924 18:56:05.405083   30254 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 18:56:05.405149   30254 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 18:56:05.406847   30254 out.go:201] 
	W0924 18:56:05.408187   30254 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 18:56:05.408203   30254 out.go:270] * 
	* 
	W0924 18:56:05.410311   30254 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 18:56:05.411552   30254 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-685475 stop -v=7 --alsologtostderr": exit status 82
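Exit status 82 above is minikube's GUEST_STOP_TIMEOUT path: after requesting the stop, the driver polls the machine state once per second ("Waiting for machine to stop 0/120" through 119/120) and gives up after 120 attempts while the VM still reports "Running". A minimal sketch of that bounded stop-and-poll pattern, using placeholder types rather than the real libmachine API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a stand-in for a libmachine driver: Stop requests a shutdown and
// State reports the current machine state ("Running", "Stopped", ...).
// These names are placeholders for illustration, not the real API.
type vm interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout mirrors the pattern in the log above: request a stop, then
// poll the state once per second for up to maxAttempts before giving up.
func stopWithTimeout(m vm, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM never leaves the Running state, reproducing the failure above.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Short attempt budget for the demo; the real run above waits 120 attempts.
	if err := stopWithTimeout(stuckVM{}, 3); err != nil {
		fmt.Println("stop failed:", err)
	}
}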
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr: (18.86188221s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-685475 -n ha-685475
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 logs -n 25: (1.506799241s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m04 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp testdata/cp-test.txt                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:44 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt                      |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475 sudo cat                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475.txt                                |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m02 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n                                                                | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | ha-685475-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-685475 ssh -n ha-685475-m03 sudo cat                                         | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC | 24 Sep 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-685475 node stop m02 -v=7                                                    | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-685475 node start m02 -v=7                                                   | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-685475 -v=7                                                          | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-685475 -v=7                                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-685475 --wait=true -v=7                                                   | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:49 UTC | 24 Sep 24 18:53 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-685475                                                               | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:53 UTC |                     |
	| node    | ha-685475 node delete m03 -v=7                                                  | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:53 UTC | 24 Sep 24 18:54 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-685475 stop -v=7                                                             | ha-685475 | jenkins | v1.34.0 | 24 Sep 24 18:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:49:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:49:41.416395   28466 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:49:41.416639   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:49:41.416647   28466 out.go:358] Setting ErrFile to fd 2...
	I0924 18:49:41.416652   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:49:41.416833   28466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:49:41.417337   28466 out.go:352] Setting JSON to false
	I0924 18:49:41.418248   28466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1932,"bootTime":1727201849,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:49:41.418338   28466 start.go:139] virtualization: kvm guest
	I0924 18:49:41.420741   28466 out.go:177] * [ha-685475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:49:41.422252   28466 notify.go:220] Checking for updates...
	I0924 18:49:41.422298   28466 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:49:41.423695   28466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:49:41.425001   28466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:49:41.426516   28466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:49:41.427970   28466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:49:41.429351   28466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:49:41.431275   28466 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:49:41.431373   28466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:49:41.431805   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:49:41.431860   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:49:41.447208   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0924 18:49:41.447693   28466 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:49:41.448258   28466 main.go:141] libmachine: Using API Version  1
	I0924 18:49:41.448282   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:49:41.448638   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:49:41.448797   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.485733   28466 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 18:49:41.486976   28466 start.go:297] selected driver: kvm2
	I0924 18:49:41.486994   28466 start.go:901] validating driver "kvm2" against &{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:49:41.487112   28466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:49:41.487450   28466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:49:41.487529   28466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:49:41.503039   28466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:49:41.503725   28466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:49:41.503754   28466 cni.go:84] Creating CNI manager for ""
	I0924 18:49:41.503780   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 18:49:41.503824   28466 start.go:340] cluster config:
	{Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:fal
se ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:49:41.503959   28466 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:49:41.505754   28466 out.go:177] * Starting "ha-685475" primary control-plane node in "ha-685475" cluster
	I0924 18:49:41.507135   28466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:49:41.507191   28466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:49:41.507203   28466 cache.go:56] Caching tarball of preloaded images
	I0924 18:49:41.507285   28466 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 18:49:41.507297   28466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 18:49:41.507422   28466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/config.json ...
	I0924 18:49:41.507688   28466 start.go:360] acquireMachinesLock for ha-685475: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:49:41.507737   28466 start.go:364] duration metric: took 29.748µs to acquireMachinesLock for "ha-685475"
	I0924 18:49:41.507757   28466 start.go:96] Skipping create...Using existing machine configuration
	I0924 18:49:41.507766   28466 fix.go:54] fixHost starting: 
	I0924 18:49:41.508061   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:49:41.508099   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:49:41.522542   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0924 18:49:41.522936   28466 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:49:41.523401   28466 main.go:141] libmachine: Using API Version  1
	I0924 18:49:41.523425   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:49:41.523886   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:49:41.524081   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.524255   28466 main.go:141] libmachine: (ha-685475) Calling .GetState
	I0924 18:49:41.525826   28466 fix.go:112] recreateIfNeeded on ha-685475: state=Running err=<nil>
	W0924 18:49:41.525859   28466 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 18:49:41.529240   28466 out.go:177] * Updating the running kvm2 "ha-685475" VM ...
	I0924 18:49:41.530712   28466 machine.go:93] provisionDockerMachine start ...
	I0924 18:49:41.530738   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:49:41.530974   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.533165   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.533580   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.533605   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.533782   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.533967   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.534112   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.534223   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.534332   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.534517   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.534529   28466 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 18:49:41.643232   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:49:41.643261   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.643481   28466 buildroot.go:166] provisioning hostname "ha-685475"
	I0924 18:49:41.643503   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.643646   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.646212   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.646505   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.646531   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.646762   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.646980   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.647132   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.647272   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.647448   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.647651   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.647664   28466 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685475 && echo "ha-685475" | sudo tee /etc/hostname
	I0924 18:49:41.779674   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685475
	
	I0924 18:49:41.779708   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.782468   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.782847   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.782871   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.783056   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:41.783235   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.783401   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:41.783498   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:41.783622   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:41.783822   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:41.783838   28466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 18:49:41.891295   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 18:49:41.891327   28466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 18:49:41.891373   28466 buildroot.go:174] setting up certificates
	I0924 18:49:41.891383   28466 provision.go:84] configureAuth start
	I0924 18:49:41.891396   28466 main.go:141] libmachine: (ha-685475) Calling .GetMachineName
	I0924 18:49:41.891628   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:49:41.894270   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.894622   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.894649   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.894778   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:41.896936   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.897279   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:41.897300   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:41.897473   28466 provision.go:143] copyHostCerts
	I0924 18:49:41.897496   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:49:41.897531   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 18:49:41.897543   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 18:49:41.897622   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 18:49:41.897720   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:49:41.897745   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 18:49:41.897755   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 18:49:41.897789   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 18:49:41.897849   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:49:41.897869   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 18:49:41.897887   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 18:49:41.897925   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 18:49:41.897989   28466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.ha-685475 san=[127.0.0.1 192.168.39.7 ha-685475 localhost minikube]
	I0924 18:49:42.055432   28466 provision.go:177] copyRemoteCerts
	I0924 18:49:42.055488   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 18:49:42.055508   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:42.057935   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.058260   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:42.058288   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.058448   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:42.058639   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.058797   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:42.058931   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:49:42.144208   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 18:49:42.144266   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 18:49:42.169405   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 18:49:42.169472   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 18:49:42.192469   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 18:49:42.192528   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 18:49:42.217239   28466 provision.go:87] duration metric: took 325.844928ms to configureAuth
	I0924 18:49:42.217266   28466 buildroot.go:189] setting minikube options for container-runtime
	I0924 18:49:42.217508   28466 config.go:182] Loaded profile config "ha-685475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:49:42.217585   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:49:42.220321   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.220734   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:49:42.220759   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:49:42.220964   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:49:42.221168   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.221408   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:49:42.221555   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:49:42.221699   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:49:42.221901   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:49:42.221921   28466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 18:51:12.912067   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 18:51:12.912095   28466 machine.go:96] duration metric: took 1m31.381364631s to provisionDockerMachine
	I0924 18:51:12.912107   28466 start.go:293] postStartSetup for "ha-685475" (driver="kvm2")
	I0924 18:51:12.912117   28466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:51:12.912132   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:12.912403   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:51:12.912427   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:12.915611   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:12.916024   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:12.916049   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:12.916219   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:12.916390   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:12.916547   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:12.916627   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:12.996794   28466 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 18:51:13.001014   28466 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 18:51:13.001036   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 18:51:13.001100   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 18:51:13.001176   28466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 18:51:13.001185   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 18:51:13.001271   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 18:51:13.010156   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:51:13.032949   28466 start.go:296] duration metric: took 120.828545ms for postStartSetup
	I0924 18:51:13.032997   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.033245   28466 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0924 18:51:13.033275   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.035773   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.036149   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.036176   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.036325   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.036515   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.036714   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.036858   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	W0924 18:51:13.116202   28466 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0924 18:51:13.116227   28466 fix.go:56] duration metric: took 1m31.608462639s for fixHost
	I0924 18:51:13.116245   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.119152   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.119484   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.119507   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.119696   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.119893   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.120022   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.120150   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.120266   28466 main.go:141] libmachine: Using SSH client type: native
	I0924 18:51:13.120454   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0924 18:51:13.120466   28466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 18:51:13.239336   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727203873.209544799
	
	I0924 18:51:13.239356   28466 fix.go:216] guest clock: 1727203873.209544799
	I0924 18:51:13.239365   28466 fix.go:229] Guest: 2024-09-24 18:51:13.209544799 +0000 UTC Remote: 2024-09-24 18:51:13.116232987 +0000 UTC m=+91.734483744 (delta=93.311812ms)
	I0924 18:51:13.239396   28466 fix.go:200] guest clock delta is within tolerance: 93.311812ms
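
The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host's wall clock; the absolute delta is then checked against a tolerance. A rough, self-contained sketch of that comparison (the one-second tolerance below is illustrative, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1727203873.209544799") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727203873.209544799")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
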
	I0924 18:51:13.239402   28466 start.go:83] releasing machines lock for "ha-685475", held for 1m31.731654477s
	I0924 18:51:13.239426   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.239702   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:51:13.242484   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.242890   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.242915   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.243055   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243574   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243740   28466 main.go:141] libmachine: (ha-685475) Calling .DriverName
	I0924 18:51:13.243820   28466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 18:51:13.243852   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.243943   28466 ssh_runner.go:195] Run: cat /version.json
	I0924 18:51:13.243963   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHHostname
	I0924 18:51:13.246494   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246586   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246861   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.246884   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.246911   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:13.246925   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:13.247052   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.247146   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHPort
	I0924 18:51:13.247218   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.247276   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHKeyPath
	I0924 18:51:13.247332   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.247384   28466 main.go:141] libmachine: (ha-685475) Calling .GetSSHUsername
	I0924 18:51:13.247462   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:13.247483   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/ha-685475/id_rsa Username:docker}
	I0924 18:51:13.323180   28466 ssh_runner.go:195] Run: systemctl --version
	I0924 18:51:13.345830   28466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 18:51:13.497788   28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 18:51:13.503037   28466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:51:13.503095   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:51:13.511308   28466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 18:51:13.511326   28466 start.go:495] detecting cgroup driver to use...
	I0924 18:51:13.511381   28466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 18:51:13.526534   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 18:51:13.540133   28466 docker.go:217] disabling cri-docker service (if available) ...
	I0924 18:51:13.540182   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 18:51:13.553431   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 18:51:13.566458   28466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 18:51:13.725268   28466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 18:51:13.878455   28466 docker.go:233] disabling docker service ...
	I0924 18:51:13.878528   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 18:51:13.897552   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 18:51:13.910756   28466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 18:51:14.059929   28466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 18:51:14.221349   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 18:51:14.235950   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:51:14.253798   28466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 18:51:14.253871   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.264318   28466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 18:51:14.264386   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.274458   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.284280   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.294214   28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:51:14.304407   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.314343   28466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.324682   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 18:51:14.336710   28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:51:14.345836   28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:51:14.355292   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:51:14.499840   28466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 18:51:19.440001   28466 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.940112457s)
	I0924 18:51:19.440030   28466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 18:51:19.440083   28466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
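
"Will wait 60s for socket path" is essentially a poll-until-exists loop on /var/run/crio/crio.sock after the crio restart. A minimal sketch of such a wait (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
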
	I0924 18:51:19.444889   28466 start.go:563] Will wait 60s for crictl version
	I0924 18:51:19.444936   28466 ssh_runner.go:195] Run: which crictl
	I0924 18:51:19.448552   28466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 18:51:19.485550   28466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 18:51:19.485641   28466 ssh_runner.go:195] Run: crio --version
	I0924 18:51:19.513377   28466 ssh_runner.go:195] Run: crio --version
	I0924 18:51:19.543102   28466 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 18:51:19.544497   28466 main.go:141] libmachine: (ha-685475) Calling .GetIP
	I0924 18:51:19.547112   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:19.547442   28466 main.go:141] libmachine: (ha-685475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:26:52", ip: ""} in network mk-ha-685475: {Iface:virbr1 ExpiryTime:2024-09-24 19:40:49 +0000 UTC Type:0 Mac:52:54:00:bb:26:52 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-685475 Clientid:01:52:54:00:bb:26:52}
	I0924 18:51:19.547465   28466 main.go:141] libmachine: (ha-685475) DBG | domain ha-685475 has defined IP address 192.168.39.7 and MAC address 52:54:00:bb:26:52 in network mk-ha-685475
	I0924 18:51:19.547660   28466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 18:51:19.552108   28466 kubeadm.go:883] updating cluster {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:51:19.552295   28466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:51:19.552356   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:51:19.593827   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:51:19.593856   28466 crio.go:433] Images already preloaded, skipping extraction
	I0924 18:51:19.593907   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 18:51:19.625890   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 18:51:19.625909   28466 cache_images.go:84] Images are preloaded, skipping loading
	I0924 18:51:19.625917   28466 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.31.1 crio true true} ...
	I0924 18:51:19.625996   28466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-685475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 18:51:19.626053   28466 ssh_runner.go:195] Run: crio config
	I0924 18:51:19.670333   28466 cni.go:84] Creating CNI manager for ""
	I0924 18:51:19.670351   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 18:51:19.670359   28466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:51:19.670378   28466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685475 NodeName:ha-685475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:51:19.670530   28466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 18:51:19.670555   28466 kube-vip.go:115] generating kube-vip config ...
	I0924 18:51:19.670605   28466 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 18:51:19.681434   28466 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 18:51:19.681567   28466 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
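
Static manifests like the kube-vip pod above are rendered from Go templates and then copied into /etc/kubernetes/manifests. A stripped-down sketch of that rendering step; the struct and template here cover only a few of the fields shown above and are not minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// vipConfig holds a small, illustrative subset of the values the real generator fills in.
type vipConfig struct {
	VIP       string
	Port      string
	Interface string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: "{{ .Interface }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values from this run; written to stdout instead of /etc/kubernetes/manifests.
	cfg := vipConfig{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
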
	I0924 18:51:19.681634   28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:51:19.690576   28466 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 18:51:19.690652   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 18:51:19.699728   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0924 18:51:19.715233   28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:51:19.730436   28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0924 18:51:19.745596   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 18:51:19.762934   28466 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 18:51:19.766460   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 18:51:19.910949   28466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 18:51:19.924822   28466 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475 for IP: 192.168.39.7
	I0924 18:51:19.924848   28466 certs.go:194] generating shared ca certs ...
	I0924 18:51:19.924865   28466 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:19.925032   28466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 18:51:19.925090   28466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 18:51:19.925106   28466 certs.go:256] generating profile certs ...
	I0924 18:51:19.925212   28466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/client.key
	I0924 18:51:19.925243   28466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038
	I0924 18:51:19.925263   28466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.7 192.168.39.17 192.168.39.84 192.168.39.254]
	I0924 18:51:20.052965   28466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 ...
	I0924 18:51:20.052996   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038: {Name:mk85a34bb2d27d29b43a53b52a4110514c1f2ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:20.053193   28466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038 ...
	I0924 18:51:20.053210   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038: {Name:mk517342573979c2bae667d9fe14d0191c724102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:51:20.053305   28466 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt.4e1f7038 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt
	I0924 18:51:20.053471   28466 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key.4e1f7038 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key
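
Generating the profile apiserver certificate with those IP SANs comes down to signing a new certificate against the minikubeCA. A self-contained sketch of the idea using crypto/x509; here the CA is freshly self-signed for the example, and the key size, validity, and subject are illustrative rather than minikube's exact certs.go settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; the real run reuses the existing minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server cert carrying the IP SANs seen in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.7"), net.ParseIP("192.168.39.17"),
			net.ParseIP("192.168.39.84"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		panic(err)
	}
}
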
	I0924 18:51:20.053635   28466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key
	I0924 18:51:20.053653   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 18:51:20.053672   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 18:51:20.053690   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:51:20.053707   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:51:20.053723   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 18:51:20.053737   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 18:51:20.053755   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 18:51:20.053772   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 18:51:20.053834   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 18:51:20.053877   28466 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 18:51:20.053895   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 18:51:20.053928   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:51:20.053957   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:51:20.053984   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 18:51:20.054049   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 18:51:20.054082   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.054103   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 18:51:20.054121   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 18:51:20.054693   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:51:20.078522   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 18:51:20.101679   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:51:20.123948   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 18:51:20.145466   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 18:51:20.167037   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 18:51:20.235432   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:51:20.268335   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/ha-685475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:51:20.320288   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:51:20.384090   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 18:51:20.451905   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 18:51:20.618753   28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:51:20.732295   28466 ssh_runner.go:195] Run: openssl version
	I0924 18:51:20.756467   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:51:20.801914   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.842090   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.842152   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:51:20.872234   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:51:20.950378   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 18:51:21.017453   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.045197   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.045268   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 18:51:21.087738   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 18:51:21.117324   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 18:51:21.152311   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.160918   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.160974   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 18:51:21.173180   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 18:51:21.261879   28466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:51:21.284126   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 18:51:21.299535   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 18:51:21.318232   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 18:51:21.332109   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 18:51:21.346404   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 18:51:21.352417   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 18:51:21.361303   28466 kubeadm.go:392] StartCluster: {Name:ha-685475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-685475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.123 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:51:21.361397   28466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 18:51:21.361438   28466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 18:51:21.417408   28466 cri.go:89] found id: "28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6"
	I0924 18:51:21.417431   28466 cri.go:89] found id: "fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706"
	I0924 18:51:21.417438   28466 cri.go:89] found id: "f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed"
	I0924 18:51:21.417442   28466 cri.go:89] found id: "98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf"
	I0924 18:51:21.417446   28466 cri.go:89] found id: "7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18"
	I0924 18:51:21.417450   28466 cri.go:89] found id: "f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936"
	I0924 18:51:21.417454   28466 cri.go:89] found id: "4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9"
	I0924 18:51:21.417458   28466 cri.go:89] found id: "7f9104d190f07befbd09ee466b024746ff7b2b398de183cd085ea33f265a2da8"
	I0924 18:51:21.417462   28466 cri.go:89] found id: "15accc82e018bbcea04a32d89aede0d281ce0186e37eea6844ffa844172f9e4e"
	I0924 18:51:21.417468   28466 cri.go:89] found id: "97afe98b678e4be38b759ea6cb446891cc336ed41021ba6bbb86be29a18b6dbd"
	I0924 18:51:21.417471   28466 cri.go:89] found id: "2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235"
	I0924 18:51:21.417475   28466 cri.go:89] found id: "7101ffaf02677078c4490807a7a38b8b8077a8323b00e1ef6c7c52dfdf7c323e"
	I0924 18:51:21.417479   28466 cri.go:89] found id: "75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f"
	I0924 18:51:21.417484   28466 cri.go:89] found id: "709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678"
	I0924 18:51:21.417488   28466 cri.go:89] found id: "9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9"
	I0924 18:51:21.417493   28466 cri.go:89] found id: "40f5664db9017d6a2a0453e30fcd1e13eb349124974c1e07a2d0ba8f50e4c50a"
	I0924 18:51:21.417496   28466 cri.go:89] found id: "e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc"
	I0924 18:51:21.417501   28466 cri.go:89] found id: "efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707"
	I0924 18:51:21.417505   28466 cri.go:89] found id: "5686da29f7aac356415909bb9de609cb333671f4d7afedbbc9f9e3f5647c2ad8"
	I0924 18:51:21.417510   28466 cri.go:89] found id: "838b3cda70bf156ac535f7619ac9923a7505a57c051985fca0a7bc98d8856aad"
	I0924 18:51:21.417515   28466 cri.go:89] found id: ""
	I0924 18:51:21.417558   28466 ssh_runner.go:195] Run: sudo runc list -f json
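
Collecting the kube-system container IDs above is a single crictl query whose output is split line by line. A small sketch of running the same query from Go (requires crictl and sudo on the host; the flags are the ones shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the log runs: all kube-system container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
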
	
	
	==> CRI-O <==
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.850944085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204184850916402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1acc80a3-4942-4b50-b93f-02d5cffd0f42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.854640391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca53e0c6-8ef6-4509-93e2-028848050e71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.859873682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca53e0c6-8ef6-4509-93e2-028848050e71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.860374834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca53e0c6-8ef6-4509-93e2-028848050e71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.910940972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0b8348e-7387-4784-9ca0-332069a476dc name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.911015453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0b8348e-7387-4784-9ca0-332069a476dc name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.912315308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a1ddd45-1854-40b8-8b8b-0b2ed9b7599c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.912761668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204184912736341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a1ddd45-1854-40b8-8b8b-0b2ed9b7599c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.913483808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19745615-a46b-493d-97e4-41921d8ec853 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.913544409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19745615-a46b-493d-97e4-41921d8ec853 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.913970628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19745615-a46b-493d-97e4-41921d8ec853 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.957682059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da27f975-2b7f-45f6-a429-e189baf853cc name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.957760373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da27f975-2b7f-45f6-a429-e189baf853cc name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.958653833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=310c4a00-ab06-4fed-9b69-3d43b72a217d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.959221676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204184959198552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=310c4a00-ab06-4fed-9b69-3d43b72a217d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.959745133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6f28e87-eb4d-47b7-979c-10fe52f14c93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.959831239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6f28e87-eb4d-47b7-979c-10fe52f14c93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:24 ha-685475 crio[3617]: time="2024-09-24 18:56:24.960267927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6f28e87-eb4d-47b7-979c-10fe52f14c93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.000678777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=397729c8-88a1-4fe4-bd3d-4f9f0562bced name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.000751912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=397729c8-88a1-4fe4-bd3d-4f9f0562bced name=/runtime.v1.RuntimeService/Version
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.001858729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a200b97-bfa7-41dc-92b1-93295bd80499 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.003156999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204185003093772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a200b97-bfa7-41dc-92b1-93295bd80499 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.004100260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0b90234-b402-4285-9e6f-a4ff5a2d46b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.004198198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0b90234-b402-4285-9e6f-a4ff5a2d46b4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 18:56:25 ha-685475 crio[3617]: time="2024-09-24 18:56:25.004966311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:807ce3be93776792ee9c4decbf9887d6a889ccc6974169c8600d689f2003c93f,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727203963236497252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727203926247308722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727203925241334408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1af26be5f5a40d4801406402fa3cecd62db912bb2008d5a26397e41d5a340d78,PodSandboxId:ef4a912012e54b91c2f83ede968d31644d7ac8c9bbd0ebb79d8a2cf530af7abd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727203924139094029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd74353f8eea359b8dbe00fb463b865c5371eb2c3eeef0c52294af03c7ace88,PodSandboxId:033dbea435ae9a4920575793e793aaa2c894e887aef5cc4a6a9d72e48a8de59d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727203914233578983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f5497a-ae6d-4051-b1bc-c84c91d0fd12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0ae32a98e68a54e285407c0f37700d28905ca8558f6adc49a4c5666ecee37a7,PodSandboxId:fc3c654a42b04dc8370c133bc9a892986c87e753f4121844fc8e7d658edba1d7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727203895900053645,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8416a5af8d7d99ec65aad9fafe08d700,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8,PodSandboxId:762235b133b6c8eb820c9ca527ac4c8bbbbfa06dd46bd5b951b1b235ba800326,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727203880775128111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6,PodSandboxId:8f9c03feb87b30b9a3fbe54c20ada29c1d80b8200c676570c0ba6165436e226c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203881069422086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706,PodSandboxId:622f4021ab51a7a66e7e53ba2c52091cbdf2ad1702f661431f701c6f173c1ef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727203880989696351,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed,PodSandboxId:3449d7cbbba1b5e47516443f6435a518be449418c4c4a153383cdb50157ed007,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727203880913154978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf,PodSandboxId:b3a57b94e74b8b39b776cef85546f16519b41aa7df47f87b415a329d75b41bb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727203880676225270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18,PodSandboxId:8ee0b5e39414e710ee1ac9bbb106b2b8da5971013b7670a0907a2dd204ff409b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727203880648001895,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a4
74c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936,PodSandboxId:0393248f7c5f7300522fd261b35bbe13202f7c73815c362e8b917f0819f7628b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727203880582361698,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7d23a9b6fbfe2d9aa17cf12d65a47,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9,PodSandboxId:4216938b3e9dcb5976611db8c6450fbd29b330049bee92fb733fc0da779bf623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727203880525163207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9ba6147bd78ec5c916c82e075c53f,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b86d48937d8427b98b70e1dd11748ebb1ed5ced64576e967a855a01f7cede4f,PodSandboxId:2517ecd8d61cdecc6476f2a74913933bd7e9454300a5d6d1a49316a4df502d17,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727203432777065073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmkfk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d4c0c92-3c76-478a-b298-c9a7ab9e3995,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235,PodSandboxId:c2c9f0a12f919389294f158ad3389e1b52f2b82080c370082a4bd3882499387d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297608236595,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf7wr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a616493e-082e-4ae6-8e12-8c4a2b37a985,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f,PodSandboxId:9f53b2b4e4e295c0dbc2e74129f2ee59edb419ff7864d0f238d7a8592539deca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727203297571348809,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fchhl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc58fefc-6210-4b70-bd0d-dbf5b093e09a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678,PodSandboxId:6c65efd7365057290e5c13d22e1c27c06594857da4ddf66ff1e281341f9e22dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727203285606323442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ms6qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60485f55-3830-4897-b38e-55779662b999,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9,PodSandboxId:bbb4cec8188185145896fe49daa6ed030a5ecf1248a3fd51c6afa5f3730a0231,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727203285407501690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8x2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95e65f4e-7461-479a-8743-ce4f891abfcf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc,PodSandboxId:9ade6d826e1256fab7ac1508cbdcf6e2c2b599c6946fd3b86a9224bff5d5c7ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727203273109965316,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590516ed80b227ea320a474c3a9ebfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707,PodSandboxId:5fa1209cd75b83fbb1e131b86057b94740a7eecd17e8ee34b480a0a2ad496464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727203273059979529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-685475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f76c4b882e3909126cd21d4982493e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0b90234-b402-4285-9e6f-a4ff5a2d46b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	807ce3be93776       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   033dbea435ae9       storage-provisioner
	33767687f698e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   4216938b3e9dc       kube-controller-manager-ha-685475
	195465ccb45fe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   0393248f7c5f7       kube-apiserver-ha-685475
	1af26be5f5a40       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   ef4a912012e54       busybox-7dff88458-hmkfk
	5fd74353f8eea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   033dbea435ae9       storage-provisioner
	d0ae32a98e68a       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   fc3c654a42b04       kube-vip-ha-685475
	28b9e54f0d805       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   8f9c03feb87b3       coredns-7c65d6cfc9-jf7wr
	fd3e8519755a0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   622f4021ab51a       coredns-7c65d6cfc9-fchhl
	f1e1b3423dfcf       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   3449d7cbbba1b       kindnet-ms6qb
	624ae9ed966d2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   762235b133b6c       kube-proxy-b8x2w
	98174055d6b70       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   b3a57b94e74b8       etcd-ha-685475
	7c127d68bc74b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   8ee0b5e39414e       kube-scheduler-ha-685475
	f14327e75ef88       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   0393248f7c5f7       kube-apiserver-ha-685475
	4411ef38af3f8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   4216938b3e9dc       kube-controller-manager-ha-685475
	9b86d48937d84       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   2517ecd8d61cd       busybox-7dff88458-hmkfk
	2c7b4241a9158       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      14 minutes ago      Exited              coredns                   0                   c2c9f0a12f919       coredns-7c65d6cfc9-jf7wr
	75aac96a2239b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      14 minutes ago      Exited              coredns                   0                   9f53b2b4e4e29       coredns-7c65d6cfc9-fchhl
	709da73468c82       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      14 minutes ago      Exited              kindnet-cni               0                   6c65efd736505       kindnet-ms6qb
	9ea87ecceac1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      14 minutes ago      Exited              kube-proxy                0                   bbb4cec818818       kube-proxy-b8x2w
	e62a02dab3075       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   9ade6d826e125       kube-scheduler-ha-685475
	efe5b6f3ceb69       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   5fa1209cd75b8       etcd-ha-685475
	
	
	==> coredns [28b9e54f0d805a89d2c497b59fe411e07dfc1fc8b1753e8e8ec7864fcae8bee6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[115397641]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:51:32.982) (total time: 10119ms):
	Trace[115397641]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer 10119ms (18:51:43.102)
	Trace[115397641]: [10.119575308s] [10.119575308s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33046->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33012->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2c7b4241a915862e8f9fd4f1495a9d065e7db7349a2f2257c4e5845f4d9a6235] <==
	[INFO] 10.244.1.2:44949 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002196411s
	[INFO] 10.244.1.2:57646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132442s
	[INFO] 10.244.1.2:45986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001533759s
	[INFO] 10.244.1.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159221s
	[INFO] 10.244.1.2:47730 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122802s
	[INFO] 10.244.2.2:49373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174893s
	[INFO] 10.244.0.4:52492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008787s
	[INFO] 10.244.0.4:33570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049583s
	[INFO] 10.244.0.4:35717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036153s
	[INFO] 10.244.1.2:39348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262289s
	[INFO] 10.244.1.2:44144 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216176s
	[INFO] 10.244.1.2:37532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017928s
	[INFO] 10.244.2.2:34536 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139562s
	[INFO] 10.244.0.4:43378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108735s
	[INFO] 10.244.0.4:50975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139299s
	[INFO] 10.244.0.4:36798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091581s
	[INFO] 10.244.1.2:55450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136524s
	[INFO] 10.244.1.2:46887 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019253s
	[INFO] 10.244.1.2:39275 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113225s
	[INFO] 10.244.1.2:44182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [75aac96a2239bdeee54221fe09253477e1467078c03b1deeefce79d4bbaf157f] <==
	[INFO] 10.244.1.2:39503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018791s
	[INFO] 10.244.1.2:56200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000107364s
	[INFO] 10.244.1.2:50181 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000477328s
	[INFO] 10.244.2.2:48517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149349s
	[INFO] 10.244.2.2:37426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161156s
	[INFO] 10.244.2.2:51780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245454s
	[INFO] 10.244.0.4:37360 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192766s
	[INFO] 10.244.0.4:49282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067708s
	[INFO] 10.244.0.4:50475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049077s
	[INFO] 10.244.0.4:42734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103381s
	[INFO] 10.244.1.2:34090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126966s
	[INFO] 10.244.1.2:49474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199973s
	[INFO] 10.244.1.2:47488 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080517s
	[INFO] 10.244.2.2:58501 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129358s
	[INFO] 10.244.2.2:35831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166592s
	[INFO] 10.244.2.2:46260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105019s
	[INFO] 10.244.0.4:34512 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070631s
	[INFO] 10.244.1.2:40219 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095437s
	[INFO] 10.244.2.2:45584 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263954s
	[INFO] 10.244.2.2:45346 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105804s
	[INFO] 10.244.2.2:33451 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099783s
	[INFO] 10.244.0.4:54263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102026s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=9m57s&timeoutSeconds=597&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [fd3e8519755a0afd9ecac3c02eeddea0f5694cde2c354f3b85316f1febea8706] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[233214872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 18:51:32.754) (total time: 10348ms):
	Trace[233214872]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer 10348ms (18:51:43.103)
	Trace[233214872]: [10.348226851s] [10.348226851s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:48120->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-685475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_41_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:56:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:52:05 +0000   Tue, 24 Sep 2024 18:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-685475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d6728db94ca4a90af6f3c76683b52c2
	  System UUID:                7d6728db-94ca-4a90-af6f-3c76683b52c2
	  Boot ID:                    d6338982-1afe-44d6-a104-48e80df984ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmkfk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-fchhl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-jf7wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-685475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-ms6qb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-685475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-685475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-b8x2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-685475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-685475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m21s                 kube-proxy       
	  Normal   Starting                 14m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                   kubelet          Node ha-685475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                   kubelet          Node ha-685475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                   kubelet          Node ha-685475 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   NodeReady                14m                   kubelet          Node ha-685475 status is now: NodeReady
	  Normal   RegisteredNode           14m                   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   NodeNotReady             5m19s (x3 over 6m9s)  kubelet          Node ha-685475 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m6s (x2 over 6m6s)   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m25s                 node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           4m15s                 node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	  Normal   RegisteredNode           3m21s                 node-controller  Node ha-685475 event: Registered Node ha-685475 in Controller
	
	
	Name:               ha-685475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_42_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:42:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:56:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:52:50 +0000   Tue, 24 Sep 2024 18:52:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-685475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad56c26961cf4d94852f19122c4c499b
	  System UUID:                ad56c269-61cf-4d94-852f-19122c4c499b
	  Boot ID:                    020aa55b-e97a-436e-ae15-d221276dc925
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6g8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-685475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-pwvfj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-685475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-685475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-dlr8f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-685475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-685475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m57s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-685475-m02 status is now: NodeNotReady
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node ha-685475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node ha-685475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-685475-m02 event: Registered Node ha-685475-m02 in Controller
	
	
	Name:               ha-685475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-685475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=ha-685475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T18_44_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:44:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:53:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 18:53:38 +0000   Tue, 24 Sep 2024 18:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    ha-685475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5be0e3597a0f4236b1fa9e5e221d49dc
	  System UUID:                5be0e359-7a0f-4236-b1fa-9e5e221d49dc
	  Boot ID:                    5cca5d30-8e5e-4d33-9fe2-bd3febd4e1d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xrwg8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-n4nlv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-9m62z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-685475-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   RegisteredNode           3m21s                  node-controller  Node ha-685475-m04 event: Registered Node ha-685475-m04 in Controller
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-685475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-685475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-685475-m04 has been rebooted, boot id: 5cca5d30-8e5e-4d33-9fe2-bd3febd4e1d0
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-685475-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m47s                  kubelet          Node ha-685475-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m45s)   node-controller  Node ha-685475-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep24 18:41] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.156659] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.148421] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.267579] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.782999] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.621822] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.062553] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.171108] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.082463] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344664] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.133235] kauditd_printk_skb: 38 callbacks suppressed
	[Sep24 18:42] kauditd_printk_skb: 24 callbacks suppressed
	[Sep24 18:51] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +0.159556] systemd-fstab-generator[3554]: Ignoring "noauto" option for root device
	[  +0.181069] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +0.157125] systemd-fstab-generator[3580]: Ignoring "noauto" option for root device
	[  +0.280727] systemd-fstab-generator[3608]: Ignoring "noauto" option for root device
	[  +5.409899] systemd-fstab-generator[3712]: Ignoring "noauto" option for root device
	[  +0.088616] kauditd_printk_skb: 100 callbacks suppressed
	[  +8.405814] kauditd_printk_skb: 107 callbacks suppressed
	[  +7.581119] kauditd_printk_skb: 2 callbacks suppressed
	[Sep24 18:52] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [98174055d6b7079883d3f1908a9af8e00eba277292db6ff3671d4c35c115d3bf] <==
	{"level":"warn","ts":"2024-09-24T18:52:57.480020Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f557fef8b50aff79","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-24T18:52:58.157753Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.157856Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.162823Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.182429Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"f557fef8b50aff79","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-24T18:52:58.184878Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:52:58.185454Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bb39151d8411994b","to":"f557fef8b50aff79","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-24T18:52:58.185583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.812214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b switched to configuration voters=(2247052033080217693 13490837375279012171)"}
	{"level":"info","ts":"2024-09-24T18:53:51.814328Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","removed-remote-peer-id":"f557fef8b50aff79","removed-remote-peer-urls":["https://192.168.39.84:2380"]}
	{"level":"info","ts":"2024-09-24T18:53:51.814426Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.814736Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.814855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.815446Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.815526Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.815613Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.815851Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","error":"context canceled"}
	{"level":"warn","ts":"2024-09-24T18:53:51.815931Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f557fef8b50aff79","error":"failed to read f557fef8b50aff79 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-24T18:53:51.816036Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.816276Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79","error":"context canceled"}
	{"level":"info","ts":"2024-09-24T18:53:51.816346Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.816391Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:53:51.816465Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"bb39151d8411994b","removed-remote-peer-id":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.830974Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bb39151d8411994b","remote-peer-id-stream-handler":"bb39151d8411994b","remote-peer-id-from":"f557fef8b50aff79"}
	{"level":"warn","ts":"2024-09-24T18:53:51.830994Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bb39151d8411994b","remote-peer-id-stream-handler":"bb39151d8411994b","remote-peer-id-from":"f557fef8b50aff79"}
	
	
	==> etcd [efe5b6f3ceb6985c8caf4f2e2bf9bbaa332643825c164d55977f950ac5925707] <==
	2024/09/24 18:49:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/24 18:49:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-24T18:49:42.403705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T18:49:42.403759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T18:49:42.403846Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bb39151d8411994b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-24T18:49:42.404014Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404102Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404178Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404346Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404417Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404527Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404585Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1f2f21eb2f90d05d"}
	{"level":"info","ts":"2024-09-24T18:49:42.404614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404712Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404857Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404913Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404961Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bb39151d8411994b","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.404990Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f557fef8b50aff79"}
	{"level":"info","ts":"2024-09-24T18:49:42.407773Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"warn","ts":"2024-09-24T18:49:42.407884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.823768217s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-24T18:49:42.407928Z","caller":"traceutil/trace.go:171","msg":"trace[58929591] range","detail":"{range_begin:; range_end:; }","duration":"1.823851418s","start":"2024-09-24T18:49:40.584068Z","end":"2024-09-24T18:49:42.407919Z","steps":["trace[58929591] 'agreement among raft nodes before linearized reading'  (duration: 1.823765319s)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:49:42.408001Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-09-24T18:49:42.408029Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-685475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	{"level":"error","ts":"2024-09-24T18:49:42.408028Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 18:56:25 up 15 min,  0 users,  load average: 0.20, 0.26, 0.23
	Linux ha-685475 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [709da73468c8229302a411066c363ac992d23cfcc4216686c1104435e202b678] <==
	I0924 18:49:06.555439       1 main.go:299] handling current node
	I0924 18:49:16.554897       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:16.554987       1 main.go:299] handling current node
	I0924 18:49:16.555013       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:16.555030       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:16.555161       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:16.555184       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:16.555240       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:16.555259       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:49:26.554865       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:26.554905       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:26.555089       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:26.555110       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:26.555160       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:26.555178       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:49:26.555224       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:26.555242       1 main.go:299] handling current node
	I0924 18:49:36.554936       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:49:36.555097       1 main.go:299] handling current node
	I0924 18:49:36.555133       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:49:36.555152       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:49:36.555314       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0924 18:49:36.555356       1 main.go:322] Node ha-685475-m03 has CIDR [10.244.2.0/24] 
	I0924 18:49:36.555452       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:49:36.555486       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f1e1b3423dfcf07f1fd0623359b74b974b452181b48fc22186f094acd2244aed] <==
	I0924 18:55:41.878400       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:55:51.871153       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:55:51.871184       1 main.go:299] handling current node
	I0924 18:55:51.871200       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:55:51.871205       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:55:51.871316       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:55:51.871337       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:56:01.878686       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:56:01.878783       1 main.go:299] handling current node
	I0924 18:56:01.878877       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:56:01.878901       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:56:01.879024       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:56:01.879044       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:56:11.878856       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:56:11.878963       1 main.go:299] handling current node
	I0924 18:56:11.879002       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:56:11.879023       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	I0924 18:56:11.879139       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:56:11.879159       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:56:21.870126       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0924 18:56:21.870234       1 main.go:322] Node ha-685475-m04 has CIDR [10.244.3.0/24] 
	I0924 18:56:21.870392       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0924 18:56:21.870434       1 main.go:299] handling current node
	I0924 18:56:21.870464       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0924 18:56:21.870482       1 main.go:322] Node ha-685475-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [195465ccb45fe6eed300efd9e005fbfcb794270def796f2c41b0d287a21789a4] <==
	I0924 18:52:07.439120       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 18:52:07.439283       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 18:52:07.524350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 18:52:07.524436       1 policy_source.go:224] refreshing policies
	I0924 18:52:07.526308       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 18:52:07.526526       1 aggregator.go:171] initial CRD sync complete...
	I0924 18:52:07.526575       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 18:52:07.526931       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 18:52:07.541498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 18:52:07.558393       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0924 18:52:07.598131       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.84]
	I0924 18:52:07.600178       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:52:07.612550       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:52:07.614064       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0924 18:52:07.619004       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0924 18:52:07.622567       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 18:52:07.626230       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 18:52:07.627304       1 cache.go:39] Caches are synced for autoregister controller
	I0924 18:52:07.631339       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 18:52:07.631415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 18:52:07.632167       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 18:52:07.632465       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 18:52:07.641168       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 18:52:08.430474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 18:52:08.730585       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.7 192.168.39.84]
	
	
	==> kube-apiserver [f14327e75ef8881e6cebafe23846d5ca345c156417d3b09147db4c76262b2936] <==
	I0924 18:51:21.054783       1 options.go:228] external host was not specified, using 192.168.39.7
	I0924 18:51:21.064779       1 server.go:142] Version: v1.31.1
	I0924 18:51:21.064849       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:51:21.784869       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0924 18:51:21.805644       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 18:51:21.806182       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0924 18:51:21.806213       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0924 18:51:21.806395       1 instance.go:232] Using reconciler: lease
	W0924 18:51:41.778875       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0924 18:51:41.784381       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0924 18:51:41.807272       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0924 18:51:41.807364       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [33767687f698e19ef02f2b404b1b8efba384d05dab96467efec9c8782611ea69] <==
	I0924 18:54:40.927070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:54:40.947840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:54:41.007665       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.650448ms"
	I0924 18:54:41.009059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.056µs"
	I0924 18:54:41.091046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	I0924 18:54:46.011327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685475-m04"
	E0924 18:54:50.979615       1 gc_controller.go:151] "Failed to get node" err="node \"ha-685475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-685475-m03"
	E0924 18:54:50.979940       1 gc_controller.go:151] "Failed to get node" err="node \"ha-685475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-685475-m03"
	E0924 18:54:50.980009       1 gc_controller.go:151] "Failed to get node" err="node \"ha-685475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-685475-m03"
	E0924 18:54:50.980042       1 gc_controller.go:151] "Failed to get node" err="node \"ha-685475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-685475-m03"
	E0924 18:54:50.980072       1 gc_controller.go:151] "Failed to get node" err="node \"ha-685475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-685475-m03"
	I0924 18:54:50.989948       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-685475-m03"
	I0924 18:54:51.035550       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-685475-m03"
	I0924 18:54:51.035631       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-685475-m03"
	I0924 18:54:51.068786       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-685475-m03"
	I0924 18:54:51.068857       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7w5dn"
	I0924 18:54:51.093057       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7w5dn"
	I0924 18:54:51.093172       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mzlfj"
	I0924 18:54:51.118973       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mzlfj"
	I0924 18:54:51.119005       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-685475-m03"
	I0924 18:54:51.156323       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-685475-m03"
	I0924 18:54:51.156358       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-685475-m03"
	I0924 18:54:51.181838       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-685475-m03"
	I0924 18:54:51.181965       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-685475-m03"
	I0924 18:54:51.211148       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-685475-m03"
	
	
	==> kube-controller-manager [4411ef38af3f8e892cb7fb1aff0ea178734c6baf3f9dc0486dec91905b263da9] <==
	I0924 18:51:22.413437       1 serving.go:386] Generated self-signed cert in-memory
	I0924 18:51:22.649049       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0924 18:51:22.649144       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:51:22.650842       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 18:51:22.650990       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 18:51:22.651462       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0924 18:51:22.651556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0924 18:51:42.812333       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.7:8443/healthz\": dial tcp 192.168.39.7:8443: connect: connection refused"
	
	
	==> kube-proxy [624ae9ed966d269bcdf339f5aed90069a73b9851203deb7e89f7ce9d3d9ce3e8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:51:23.518303       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:26.591088       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:29.662200       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:35.807300       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:51:45.022640       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 18:52:03.455843       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-685475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0924 18:52:03.455960       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0924 18:52:03.456032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:52:03.491361       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:52:03.491449       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:52:03.491485       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:52:03.494272       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:52:03.494629       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:52:03.494850       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:52:03.496326       1 config.go:199] "Starting service config controller"
	I0924 18:52:03.496372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:52:03.496441       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:52:03.496475       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:52:03.496456       1 config.go:328] "Starting node config controller"
	I0924 18:52:03.497267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:52:05.797169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:52:05.797308       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:52:05.797339       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9ea87ecceac1c4d9d7efcb8156848f46317ff07ec91ace1b5ab7255030e1a9b9] <==
	W0924 18:48:19.838522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:19.838576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0924 18:48:19.838529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:28.030159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:28.030301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:28.030517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	W0924 18:48:28.030580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:28.030634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0924 18:48:28.030636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:37.632170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:37.632418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:37.632514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:37.632724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:40.703562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:40.703713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:48:59.135603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:48:59.135658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:02.206735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:02.206793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:05.278907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:05.279023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:29.855020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:29.855121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-685475&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 18:49:42.142362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 18:49:42.142707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1697\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [7c127d68bc74bd71c0a1e6e422d8d417299f860f8790701ec8a1dfa5af2abc18] <==
	W0924 18:51:58.734328       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.7:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:51:58.734489       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.7:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:51:59.482389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.7:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:51:59.482467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.7:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.305853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.7:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.306300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.7:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.317411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.7:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.317560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.7:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.704339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.7:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.704421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.7:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:00.740626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.7:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:00.740711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.7:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:01.075553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:01.075680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:01.499224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.7:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:01.499362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.7:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:02.913549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:02.913660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:02.993273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.7:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:02.993411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.7:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:03.899703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.7:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:03.899795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.7:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	W0924 18:52:04.940409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.7:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.7:8443: connect: connection refused
	E0924 18:52:04.940545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.7:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.7:8443: connect: connection refused" logger="UnhandledError"
	I0924 18:52:14.218620       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e62a02dab307566894391e9df91ff2e84db1aa37e24afe8d4b58dffc99bd78cc] <==
	E0924 18:41:17.231072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:41:17.384731       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:41:17.384781       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 18:41:17.385753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:41:17.385816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 18:41:20.277859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 18:43:50.159728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w6g8l" node="ha-685475-m02"
	E0924 18:43:50.159906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w6g8l\": pod busybox-7dff88458-w6g8l is already assigned to node \"ha-685475-m02\"" pod="default/busybox-7dff88458-w6g8l"
	E0924 18:43:50.160616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hmkfk" node="ha-685475"
	E0924 18:43:50.160683       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hmkfk\": pod busybox-7dff88458-hmkfk is already assigned to node \"ha-685475\"" pod="default/busybox-7dff88458-hmkfk"
	E0924 18:44:24.296261       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:44:24.296334       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d172ae09-1eb7-4e5d-a5a1-e865b926b6eb(kube-system/kube-proxy-9m62z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9m62z"
	E0924 18:44:24.296350       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9m62z\": pod kube-proxy-9m62z is already assigned to node \"ha-685475-m04\"" pod="kube-system/kube-proxy-9m62z"
	I0924 18:44:24.296367       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9m62z" node="ha-685475-m04"
	E0924 18:49:33.050251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0924 18:49:36.566629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0924 18:49:36.907142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0924 18:49:37.791916       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0924 18:49:37.973674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0924 18:49:38.551229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0924 18:49:40.418969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0924 18:49:40.596778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0924 18:49:42.179148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0924 18:49:42.271632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0924 18:49:42.334956       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 18:55:09 ha-685475 kubelet[1306]: E0924 18:55:09.399368    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204109397838396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:19 ha-685475 kubelet[1306]: E0924 18:55:19.240045    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:55:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:55:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:55:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:55:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:55:19 ha-685475 kubelet[1306]: E0924 18:55:19.402625    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204119402352596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:19 ha-685475 kubelet[1306]: E0924 18:55:19.402663    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204119402352596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:29 ha-685475 kubelet[1306]: E0924 18:55:29.405782    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204129404900532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:29 ha-685475 kubelet[1306]: E0924 18:55:29.408507    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204129404900532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:39 ha-685475 kubelet[1306]: E0924 18:55:39.411668    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204139411441580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:39 ha-685475 kubelet[1306]: E0924 18:55:39.411712    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204139411441580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:49 ha-685475 kubelet[1306]: E0924 18:55:49.413672    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204149412965729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:49 ha-685475 kubelet[1306]: E0924 18:55:49.413727    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204149412965729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:59 ha-685475 kubelet[1306]: E0924 18:55:59.414996    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204159414632844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:55:59 ha-685475 kubelet[1306]: E0924 18:55:59.415021    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204159414632844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:56:09 ha-685475 kubelet[1306]: E0924 18:56:09.419722    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204169417458275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:56:09 ha-685475 kubelet[1306]: E0924 18:56:09.419747    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204169417458275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:56:19 ha-685475 kubelet[1306]: E0924 18:56:19.239506    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 18:56:19 ha-685475 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 18:56:19 ha-685475 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 18:56:19 ha-685475 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 18:56:19 ha-685475 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 18:56:19 ha-685475 kubelet[1306]: E0924 18:56:19.422636    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204179422157855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 18:56:19 ha-685475 kubelet[1306]: E0924 18:56:19.422661    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727204179422157855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 18:56:24.590952   30852 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19700-3751/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-685475 -n ha-685475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-624105
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-624105
E0924 19:12:24.266988   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-624105: exit status 82 (2m1.732113374s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-624105-m03"  ...
	* Stopping node "multinode-624105-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-624105" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-624105 --wait=true -v=8 --alsologtostderr
E0924 19:14:49.790487   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-624105 --wait=true -v=8 --alsologtostderr: (3m19.340991725s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-624105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-624105 -n multinode-624105
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 logs -n 25: (1.364281271s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105:/home/docker/cp-test_multinode-624105-m02_multinode-624105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105 sudo cat                                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m02_multinode-624105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03:/home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105-m03 sudo cat                                   | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp testdata/cp-test.txt                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105:/home/docker/cp-test_multinode-624105-m03_multinode-624105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105 sudo cat                                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02:/home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105-m02 sudo cat                                   | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-624105 node stop m03                                                          | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	| node    | multinode-624105 node start                                                             | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| stop    | -p multinode-624105                                                                     | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| start   | -p multinode-624105                                                                     | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:13 UTC | 24 Sep 24 19:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:13:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:13:15.483380   40358 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:13:15.483517   40358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:13:15.483527   40358 out.go:358] Setting ErrFile to fd 2...
	I0924 19:13:15.483534   40358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:13:15.483748   40358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:13:15.484327   40358 out.go:352] Setting JSON to false
	I0924 19:13:15.485213   40358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3346,"bootTime":1727201849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:13:15.485306   40358 start.go:139] virtualization: kvm guest
	I0924 19:13:15.487587   40358 out.go:177] * [multinode-624105] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:13:15.489076   40358 notify.go:220] Checking for updates...
	I0924 19:13:15.489103   40358 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:13:15.490692   40358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:13:15.492027   40358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:13:15.493313   40358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:13:15.494436   40358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:13:15.495594   40358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:13:15.497245   40358 config.go:182] Loaded profile config "multinode-624105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:13:15.497332   40358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:13:15.497800   40358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:13:15.497840   40358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:13:15.512831   40358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0924 19:13:15.513325   40358 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:13:15.513874   40358 main.go:141] libmachine: Using API Version  1
	I0924 19:13:15.513924   40358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:13:15.514289   40358 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:13:15.514501   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.549364   40358 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:13:15.550456   40358 start.go:297] selected driver: kvm2
	I0924 19:13:15.550476   40358 start.go:901] validating driver "kvm2" against &{Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:13:15.550620   40358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:13:15.551020   40358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:13:15.551112   40358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:13:15.566344   40358 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:13:15.567100   40358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:13:15.567129   40358 cni.go:84] Creating CNI manager for ""
	I0924 19:13:15.567163   40358 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 19:13:15.567232   40358 start.go:340] cluster config:
	{Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:13:15.567367   40358 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:13:15.569214   40358 out.go:177] * Starting "multinode-624105" primary control-plane node in "multinode-624105" cluster
	I0924 19:13:15.570458   40358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:13:15.570496   40358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 19:13:15.570508   40358 cache.go:56] Caching tarball of preloaded images
	I0924 19:13:15.570581   40358 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:13:15.570611   40358 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 19:13:15.570728   40358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/config.json ...
	I0924 19:13:15.570972   40358 start.go:360] acquireMachinesLock for multinode-624105: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:13:15.571026   40358 start.go:364] duration metric: took 35.55µs to acquireMachinesLock for "multinode-624105"
	I0924 19:13:15.571045   40358 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:13:15.571054   40358 fix.go:54] fixHost starting: 
	I0924 19:13:15.571365   40358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:13:15.571408   40358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:13:15.585794   40358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0924 19:13:15.586168   40358 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:13:15.586583   40358 main.go:141] libmachine: Using API Version  1
	I0924 19:13:15.586601   40358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:13:15.587009   40358 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:13:15.587199   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.587328   40358 main.go:141] libmachine: (multinode-624105) Calling .GetState
	I0924 19:13:15.588651   40358 fix.go:112] recreateIfNeeded on multinode-624105: state=Running err=<nil>
	W0924 19:13:15.588669   40358 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:13:15.590420   40358 out.go:177] * Updating the running kvm2 "multinode-624105" VM ...
	I0924 19:13:15.591556   40358 machine.go:93] provisionDockerMachine start ...
	I0924 19:13:15.591572   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.591734   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.593945   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.594314   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.594332   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.594500   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.594669   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.594798   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.594926   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.595076   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.595222   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.595232   40358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:13:15.698975   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-624105
	
	I0924 19:13:15.699006   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.699279   40358 buildroot.go:166] provisioning hostname "multinode-624105"
	I0924 19:13:15.699304   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.699491   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.702294   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.702849   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.702892   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.702991   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.703191   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.703331   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.703461   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.703660   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.703872   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.703888   40358 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-624105 && echo "multinode-624105" | sudo tee /etc/hostname
	I0924 19:13:15.819337   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-624105
	
	I0924 19:13:15.819376   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.822034   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.822396   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.822422   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.822614   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.822799   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.822944   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.823059   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.823211   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.823373   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.823389   40358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-624105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-624105/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-624105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:13:15.927243   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:13:15.927284   40358 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:13:15.927309   40358 buildroot.go:174] setting up certificates
	I0924 19:13:15.927321   40358 provision.go:84] configureAuth start
	I0924 19:13:15.927332   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.927588   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:13:15.930204   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.930728   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.930758   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.930945   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.933185   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.933519   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.933550   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.933709   40358 provision.go:143] copyHostCerts
	I0924 19:13:15.933737   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:13:15.933764   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:13:15.933773   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:13:15.933841   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:13:15.933916   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:13:15.933932   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:13:15.933938   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:13:15.933961   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:13:15.934001   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:13:15.934017   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:13:15.934023   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:13:15.934043   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:13:15.934089   40358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.multinode-624105 san=[127.0.0.1 192.168.39.206 localhost minikube multinode-624105]
	I0924 19:13:16.010522   40358 provision.go:177] copyRemoteCerts
	I0924 19:13:16.010579   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:13:16.010600   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:16.013131   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.013468   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:16.013500   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.013642   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:16.013805   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.013957   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:16.014110   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:13:16.096120   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 19:13:16.096195   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:13:16.118408   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 19:13:16.118459   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0924 19:13:16.141550   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 19:13:16.141610   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:13:16.163790   40358 provision.go:87] duration metric: took 236.45809ms to configureAuth
	I0924 19:13:16.163813   40358 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:13:16.164008   40358 config.go:182] Loaded profile config "multinode-624105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:13:16.164076   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:16.166523   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.167118   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:16.167150   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.167310   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:16.167492   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.167645   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.167807   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:16.167947   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:16.168133   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:16.168149   40358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:14:46.869475   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:14:46.869526   40358 machine.go:96] duration metric: took 1m31.277957272s to provisionDockerMachine
	I0924 19:14:46.869542   40358 start.go:293] postStartSetup for "multinode-624105" (driver="kvm2")
	I0924 19:14:46.869565   40358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:14:46.869611   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:46.869942   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:14:46.869977   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:46.873216   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.873638   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:46.873664   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.873805   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:46.873995   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.874159   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:46.874276   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:46.956705   40358 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:14:46.960795   40358 command_runner.go:130] > NAME=Buildroot
	I0924 19:14:46.960810   40358 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0924 19:14:46.960815   40358 command_runner.go:130] > ID=buildroot
	I0924 19:14:46.960819   40358 command_runner.go:130] > VERSION_ID=2023.02.9
	I0924 19:14:46.960824   40358 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0924 19:14:46.960853   40358 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:14:46.960870   40358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:14:46.960936   40358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:14:46.961006   40358 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:14:46.961017   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 19:14:46.961095   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:14:46.969889   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:14:46.992119   40358 start.go:296] duration metric: took 122.564038ms for postStartSetup
	I0924 19:14:46.992165   40358 fix.go:56] duration metric: took 1m31.421112791s for fixHost
	I0924 19:14:46.992196   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:46.995170   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.995557   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:46.995584   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.995743   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:46.995912   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.996058   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.996180   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:46.996403   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:14:46.996614   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:14:46.996627   40358 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:14:47.099135   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727205287.080707152
	
	I0924 19:14:47.099156   40358 fix.go:216] guest clock: 1727205287.080707152
	I0924 19:14:47.099164   40358 fix.go:229] Guest: 2024-09-24 19:14:47.080707152 +0000 UTC Remote: 2024-09-24 19:14:46.992174081 +0000 UTC m=+91.543141311 (delta=88.533071ms)
	I0924 19:14:47.099194   40358 fix.go:200] guest clock delta is within tolerance: 88.533071ms
	I0924 19:14:47.099200   40358 start.go:83] releasing machines lock for "multinode-624105", held for 1m31.528163017s
	I0924 19:14:47.099223   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.099454   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:14:47.102316   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.102729   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.102759   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.102931   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103397   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103546   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103643   40358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:14:47.103687   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:47.103741   40358 ssh_runner.go:195] Run: cat /version.json
	I0924 19:14:47.103761   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:47.106181   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106522   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.106549   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106598   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106721   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:47.106891   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:47.107034   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:47.107082   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.107102   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.107201   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:47.107263   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:47.107398   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:47.107519   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:47.107651   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:47.205201   40358 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0924 19:14:47.205239   40358 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0924 19:14:47.205392   40358 ssh_runner.go:195] Run: systemctl --version
	I0924 19:14:47.210822   40358 command_runner.go:130] > systemd 252 (252)
	I0924 19:14:47.210877   40358 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0924 19:14:47.210966   40358 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:14:47.358961   40358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 19:14:47.372489   40358 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0924 19:14:47.372553   40358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:14:47.372607   40358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:14:47.383102   40358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 19:14:47.383128   40358 start.go:495] detecting cgroup driver to use...
	I0924 19:14:47.383200   40358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:14:47.401448   40358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:14:47.416827   40358 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:14:47.416890   40358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:14:47.432271   40358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:14:47.447001   40358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:14:47.594228   40358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:14:47.736944   40358 docker.go:233] disabling docker service ...
	I0924 19:14:47.737008   40358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:14:47.753225   40358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:14:47.766218   40358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:14:47.898567   40358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:14:48.034620   40358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:14:48.049035   40358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:14:48.065477   40358 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0924 19:14:48.065531   40358 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:14:48.065591   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.075363   40358 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:14:48.075419   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.091883   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.113599   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.133509   40358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:14:48.143720   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.153373   40358 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.163606   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.173262   40358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:14:48.181937   40358 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0924 19:14:48.182015   40358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:14:48.190769   40358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:14:48.333272   40358 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:14:48.519904   40358 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:14:48.519961   40358 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:14:48.524364   40358 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0924 19:14:48.524388   40358 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0924 19:14:48.524397   40358 command_runner.go:130] > Device: 0,22	Inode: 1301        Links: 1
	I0924 19:14:48.524408   40358 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 19:14:48.524417   40358 command_runner.go:130] > Access: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524428   40358 command_runner.go:130] > Modify: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524435   40358 command_runner.go:130] > Change: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524439   40358 command_runner.go:130] >  Birth: -
	I0924 19:14:48.524543   40358 start.go:563] Will wait 60s for crictl version
	I0924 19:14:48.524603   40358 ssh_runner.go:195] Run: which crictl
	I0924 19:14:48.527973   40358 command_runner.go:130] > /usr/bin/crictl
	I0924 19:14:48.528027   40358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:14:48.560409   40358 command_runner.go:130] > Version:  0.1.0
	I0924 19:14:48.560430   40358 command_runner.go:130] > RuntimeName:  cri-o
	I0924 19:14:48.560435   40358 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0924 19:14:48.560441   40358 command_runner.go:130] > RuntimeApiVersion:  v1
	I0924 19:14:48.561489   40358 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:14:48.561574   40358 ssh_runner.go:195] Run: crio --version
	I0924 19:14:48.590182   40358 command_runner.go:130] > crio version 1.29.1
	I0924 19:14:48.590203   40358 command_runner.go:130] > Version:        1.29.1
	I0924 19:14:48.590209   40358 command_runner.go:130] > GitCommit:      unknown
	I0924 19:14:48.590214   40358 command_runner.go:130] > GitCommitDate:  unknown
	I0924 19:14:48.590218   40358 command_runner.go:130] > GitTreeState:   clean
	I0924 19:14:48.590223   40358 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 19:14:48.590228   40358 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 19:14:48.590232   40358 command_runner.go:130] > Compiler:       gc
	I0924 19:14:48.590235   40358 command_runner.go:130] > Platform:       linux/amd64
	I0924 19:14:48.590266   40358 command_runner.go:130] > Linkmode:       dynamic
	I0924 19:14:48.590273   40358 command_runner.go:130] > BuildTags:      
	I0924 19:14:48.590278   40358 command_runner.go:130] >   containers_image_ostree_stub
	I0924 19:14:48.590281   40358 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 19:14:48.590285   40358 command_runner.go:130] >   btrfs_noversion
	I0924 19:14:48.590292   40358 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 19:14:48.590296   40358 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 19:14:48.590300   40358 command_runner.go:130] >   seccomp
	I0924 19:14:48.590303   40358 command_runner.go:130] > LDFlags:          unknown
	I0924 19:14:48.590310   40358 command_runner.go:130] > SeccompEnabled:   true
	I0924 19:14:48.590313   40358 command_runner.go:130] > AppArmorEnabled:  false
	I0924 19:14:48.591468   40358 ssh_runner.go:195] Run: crio --version
	I0924 19:14:48.621554   40358 command_runner.go:130] > crio version 1.29.1
	I0924 19:14:48.621580   40358 command_runner.go:130] > Version:        1.29.1
	I0924 19:14:48.621586   40358 command_runner.go:130] > GitCommit:      unknown
	I0924 19:14:48.621590   40358 command_runner.go:130] > GitCommitDate:  unknown
	I0924 19:14:48.621595   40358 command_runner.go:130] > GitTreeState:   clean
	I0924 19:14:48.621603   40358 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 19:14:48.621611   40358 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 19:14:48.621617   40358 command_runner.go:130] > Compiler:       gc
	I0924 19:14:48.621626   40358 command_runner.go:130] > Platform:       linux/amd64
	I0924 19:14:48.621638   40358 command_runner.go:130] > Linkmode:       dynamic
	I0924 19:14:48.621645   40358 command_runner.go:130] > BuildTags:      
	I0924 19:14:48.621650   40358 command_runner.go:130] >   containers_image_ostree_stub
	I0924 19:14:48.621655   40358 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 19:14:48.621659   40358 command_runner.go:130] >   btrfs_noversion
	I0924 19:14:48.621665   40358 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 19:14:48.621669   40358 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 19:14:48.621674   40358 command_runner.go:130] >   seccomp
	I0924 19:14:48.621677   40358 command_runner.go:130] > LDFlags:          unknown
	I0924 19:14:48.621682   40358 command_runner.go:130] > SeccompEnabled:   true
	I0924 19:14:48.621686   40358 command_runner.go:130] > AppArmorEnabled:  false
	I0924 19:14:48.624655   40358 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:14:48.625868   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:14:48.628401   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:48.628889   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:48.628911   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:48.629276   40358 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:14:48.633069   40358 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0924 19:14:48.633251   40358 kubeadm.go:883] updating cluster {Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:14:48.633386   40358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:14:48.633429   40358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:14:48.669706   40358 command_runner.go:130] > {
	I0924 19:14:48.669726   40358 command_runner.go:130] >   "images": [
	I0924 19:14:48.669730   40358 command_runner.go:130] >     {
	I0924 19:14:48.669738   40358 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 19:14:48.669742   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669748   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 19:14:48.669752   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669756   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.669766   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 19:14:48.669773   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 19:14:48.669777   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669781   40358 command_runner.go:130] >       "size": "87190579",
	I0924 19:14:48.669788   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.669792   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.669799   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.669805   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.669810   40358 command_runner.go:130] >     },
	I0924 19:14:48.669815   40358 command_runner.go:130] >     {
	I0924 19:14:48.669820   40358 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 19:14:48.669828   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669838   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 19:14:48.669847   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669852   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.669867   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 19:14:48.669882   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 19:14:48.669888   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669893   40358 command_runner.go:130] >       "size": "1363676",
	I0924 19:14:48.669899   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.669916   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.669926   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.669932   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.669942   40358 command_runner.go:130] >     },
	I0924 19:14:48.669947   40358 command_runner.go:130] >     {
	I0924 19:14:48.669960   40358 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 19:14:48.669969   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669981   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 19:14:48.669989   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669996   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670010   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 19:14:48.670021   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 19:14:48.670027   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670032   40358 command_runner.go:130] >       "size": "31470524",
	I0924 19:14:48.670040   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670049   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670056   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670068   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670077   40358 command_runner.go:130] >     },
	I0924 19:14:48.670085   40358 command_runner.go:130] >     {
	I0924 19:14:48.670096   40358 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 19:14:48.670105   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670116   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 19:14:48.670124   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670128   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670143   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 19:14:48.670164   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 19:14:48.670173   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670184   40358 command_runner.go:130] >       "size": "63273227",
	I0924 19:14:48.670193   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670202   40358 command_runner.go:130] >       "username": "nonroot",
	I0924 19:14:48.670212   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670220   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670226   40358 command_runner.go:130] >     },
	I0924 19:14:48.670230   40358 command_runner.go:130] >     {
	I0924 19:14:48.670242   40358 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 19:14:48.670251   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670262   40358 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 19:14:48.670270   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670282   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670295   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 19:14:48.670309   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 19:14:48.670318   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670323   40358 command_runner.go:130] >       "size": "149009664",
	I0924 19:14:48.670331   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670338   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670347   40358 command_runner.go:130] >       },
	I0924 19:14:48.670356   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670366   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670376   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670384   40358 command_runner.go:130] >     },
	I0924 19:14:48.670393   40358 command_runner.go:130] >     {
	I0924 19:14:48.670406   40358 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 19:14:48.670414   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670423   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 19:14:48.670428   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670437   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670451   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 19:14:48.670468   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 19:14:48.670476   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670486   40358 command_runner.go:130] >       "size": "95237600",
	I0924 19:14:48.670495   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670505   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670512   40358 command_runner.go:130] >       },
	I0924 19:14:48.670519   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670524   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670533   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670541   40358 command_runner.go:130] >     },
	I0924 19:14:48.670547   40358 command_runner.go:130] >     {
	I0924 19:14:48.670566   40358 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 19:14:48.670575   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670584   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 19:14:48.670593   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670600   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670613   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 19:14:48.670625   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 19:14:48.670634   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670643   40358 command_runner.go:130] >       "size": "89437508",
	I0924 19:14:48.670648   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670656   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670663   40358 command_runner.go:130] >       },
	I0924 19:14:48.670672   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670678   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670687   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670693   40358 command_runner.go:130] >     },
	I0924 19:14:48.670701   40358 command_runner.go:130] >     {
	I0924 19:14:48.670711   40358 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 19:14:48.670719   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670724   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 19:14:48.670728   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670733   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670749   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 19:14:48.670758   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 19:14:48.670762   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670766   40358 command_runner.go:130] >       "size": "92733849",
	I0924 19:14:48.670772   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670780   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670786   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670800   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670806   40358 command_runner.go:130] >     },
	I0924 19:14:48.670811   40358 command_runner.go:130] >     {
	I0924 19:14:48.670820   40358 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 19:14:48.670837   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670845   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 19:14:48.670850   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670854   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670861   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 19:14:48.670873   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 19:14:48.670877   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670882   40358 command_runner.go:130] >       "size": "68420934",
	I0924 19:14:48.670885   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670889   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670892   40358 command_runner.go:130] >       },
	I0924 19:14:48.670896   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670899   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670903   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670907   40358 command_runner.go:130] >     },
	I0924 19:14:48.670909   40358 command_runner.go:130] >     {
	I0924 19:14:48.670915   40358 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 19:14:48.670919   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670923   40358 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 19:14:48.670927   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670930   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670938   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 19:14:48.670945   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 19:14:48.670949   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670955   40358 command_runner.go:130] >       "size": "742080",
	I0924 19:14:48.670958   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670962   40358 command_runner.go:130] >         "value": "65535"
	I0924 19:14:48.670968   40358 command_runner.go:130] >       },
	I0924 19:14:48.670972   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670978   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670982   40358 command_runner.go:130] >       "pinned": true
	I0924 19:14:48.670987   40358 command_runner.go:130] >     }
	I0924 19:14:48.670990   40358 command_runner.go:130] >   ]
	I0924 19:14:48.670995   40358 command_runner.go:130] > }
	I0924 19:14:48.671176   40358 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:14:48.671187   40358 crio.go:433] Images already preloaded, skipping extraction
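	(The JSON dump above is what minikube parses to conclude that every required image is already in CRI-O's store. Purely as an illustration, and not minikube's actual crio.go code, the Go sketch below runs the same `sudo crictl images --output json` command and checks for one of the repo tags listed above; the struct fields simply mirror the `images`/`repoTags`/`id` keys visible in the log, and the expected tag is taken from the listing.)

	// Illustrative only: mirror the preload check logged above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Same command the log shows being run on the node over SSH.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl failed:", err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, "bad JSON:", err)
			os.Exit(1)
		}
		want := "registry.k8s.io/kube-apiserver:v1.31.1" // one of the tags listed above
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("found", want, "as", img.ID)
					return
				}
			}
		}
		fmt.Println(want, "not present; preload/extraction would be needed")
	}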
	I0924 19:14:48.671235   40358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:14:48.705211   40358 command_runner.go:130] > {
	I0924 19:14:48.705232   40358 command_runner.go:130] >   "images": [
	I0924 19:14:48.705253   40358 command_runner.go:130] >     {
	I0924 19:14:48.705262   40358 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 19:14:48.705266   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705272   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 19:14:48.705275   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705281   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705293   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 19:14:48.705307   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 19:14:48.705315   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705323   40358 command_runner.go:130] >       "size": "87190579",
	I0924 19:14:48.705333   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705339   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705348   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705354   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705359   40358 command_runner.go:130] >     },
	I0924 19:14:48.705363   40358 command_runner.go:130] >     {
	I0924 19:14:48.705369   40358 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 19:14:48.705376   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705385   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 19:14:48.705393   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705400   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705415   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 19:14:48.705429   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 19:14:48.705438   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705446   40358 command_runner.go:130] >       "size": "1363676",
	I0924 19:14:48.705453   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705460   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705466   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705472   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705480   40358 command_runner.go:130] >     },
	I0924 19:14:48.705489   40358 command_runner.go:130] >     {
	I0924 19:14:48.705502   40358 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 19:14:48.705512   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705523   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 19:14:48.705531   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705539   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705547   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 19:14:48.705560   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 19:14:48.705569   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705581   40358 command_runner.go:130] >       "size": "31470524",
	I0924 19:14:48.705591   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705607   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705618   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705628   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705635   40358 command_runner.go:130] >     },
	I0924 19:14:48.705639   40358 command_runner.go:130] >     {
	I0924 19:14:48.705650   40358 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 19:14:48.705660   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705669   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 19:14:48.705678   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705687   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705701   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 19:14:48.705719   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 19:14:48.705727   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705736   40358 command_runner.go:130] >       "size": "63273227",
	I0924 19:14:48.705745   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705755   40358 command_runner.go:130] >       "username": "nonroot",
	I0924 19:14:48.705768   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705777   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705786   40358 command_runner.go:130] >     },
	I0924 19:14:48.705795   40358 command_runner.go:130] >     {
	I0924 19:14:48.705808   40358 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 19:14:48.705817   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705826   40358 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 19:14:48.705832   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705838   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705851   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 19:14:48.705865   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 19:14:48.705873   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705883   40358 command_runner.go:130] >       "size": "149009664",
	I0924 19:14:48.705892   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.705901   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.705909   40358 command_runner.go:130] >       },
	I0924 19:14:48.705918   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705927   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705935   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705938   40358 command_runner.go:130] >     },
	I0924 19:14:48.705946   40358 command_runner.go:130] >     {
	I0924 19:14:48.705956   40358 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 19:14:48.705965   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705974   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 19:14:48.705984   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705993   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706006   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 19:14:48.706021   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 19:14:48.706029   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706035   40358 command_runner.go:130] >       "size": "95237600",
	I0924 19:14:48.706042   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706047   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706055   40358 command_runner.go:130] >       },
	I0924 19:14:48.706064   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706072   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706082   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706090   40358 command_runner.go:130] >     },
	I0924 19:14:48.706097   40358 command_runner.go:130] >     {
	I0924 19:14:48.706109   40358 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 19:14:48.706118   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706129   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 19:14:48.706136   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706140   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706154   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 19:14:48.706169   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 19:14:48.706181   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706191   40358 command_runner.go:130] >       "size": "89437508",
	I0924 19:14:48.706200   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706209   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706218   40358 command_runner.go:130] >       },
	I0924 19:14:48.706225   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706233   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706238   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706244   40358 command_runner.go:130] >     },
	I0924 19:14:48.706250   40358 command_runner.go:130] >     {
	I0924 19:14:48.706263   40358 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 19:14:48.706273   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706284   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 19:14:48.706293   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706301   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706322   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 19:14:48.706334   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 19:14:48.706339   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706349   40358 command_runner.go:130] >       "size": "92733849",
	I0924 19:14:48.706358   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.706367   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706375   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706383   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706390   40358 command_runner.go:130] >     },
	I0924 19:14:48.706399   40358 command_runner.go:130] >     {
	I0924 19:14:48.706408   40358 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 19:14:48.706417   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706427   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 19:14:48.706436   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706443   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706461   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 19:14:48.706475   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 19:14:48.706483   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706491   40358 command_runner.go:130] >       "size": "68420934",
	I0924 19:14:48.706500   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706507   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706515   40358 command_runner.go:130] >       },
	I0924 19:14:48.706522   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706530   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706536   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706541   40358 command_runner.go:130] >     },
	I0924 19:14:48.706548   40358 command_runner.go:130] >     {
	I0924 19:14:48.706558   40358 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 19:14:48.706567   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706576   40358 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 19:14:48.706585   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706591   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706615   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 19:14:48.706632   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 19:14:48.706642   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706652   40358 command_runner.go:130] >       "size": "742080",
	I0924 19:14:48.706661   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706671   40358 command_runner.go:130] >         "value": "65535"
	I0924 19:14:48.706679   40358 command_runner.go:130] >       },
	I0924 19:14:48.706685   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706694   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706703   40358 command_runner.go:130] >       "pinned": true
	I0924 19:14:48.706708   40358 command_runner.go:130] >     }
	I0924 19:14:48.706711   40358 command_runner.go:130] >   ]
	I0924 19:14:48.706714   40358 command_runner.go:130] > }
	I0924 19:14:48.706911   40358 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:14:48.706926   40358 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:14:48.706942   40358 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.1 crio true true} ...
	I0924 19:14:48.707069   40358 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-624105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
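	(The kubelet stanza above is the systemd drop-in minikube renders for this node, with the node name and IP substituted in. As a minimal sketch only, and not minikube's kubeadm.go implementation, the Go snippet below shows how such a drop-in could be templated from the node parameters logged here; the values are the ones appearing in this log.)

	// Illustrative sketch: render a kubelet ExecStart override from node parameters.
	package main

	import "fmt"

	type node struct {
		Name    string
		IP      string
		KubeVer string
	}

	func kubeletUnit(n node) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, n.KubeVer, n.Name, n.IP)
	}

	func main() {
		fmt.Print(kubeletUnit(node{Name: "multinode-624105", IP: "192.168.39.206", KubeVer: "v1.31.1"}))
	}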
	I0924 19:14:48.707146   40358 ssh_runner.go:195] Run: crio config
	I0924 19:14:48.736190   40358 command_runner.go:130] ! time="2024-09-24 19:14:48.717719834Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0924 19:14:48.741422   40358 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0924 19:14:48.752004   40358 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0924 19:14:48.752022   40358 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0924 19:14:48.752028   40358 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0924 19:14:48.752032   40358 command_runner.go:130] > #
	I0924 19:14:48.752042   40358 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0924 19:14:48.752049   40358 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0924 19:14:48.752057   40358 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0924 19:14:48.752066   40358 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0924 19:14:48.752073   40358 command_runner.go:130] > # reload'.
	I0924 19:14:48.752084   40358 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0924 19:14:48.752095   40358 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0924 19:14:48.752107   40358 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0924 19:14:48.752120   40358 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0924 19:14:48.752142   40358 command_runner.go:130] > [crio]
	I0924 19:14:48.752155   40358 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0924 19:14:48.752161   40358 command_runner.go:130] > # containers images, in this directory.
	I0924 19:14:48.752165   40358 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0924 19:14:48.752175   40358 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0924 19:14:48.752182   40358 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0924 19:14:48.752190   40358 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores newly pulled images in this directory rather than under Root.
	I0924 19:14:48.752196   40358 command_runner.go:130] > # imagestore = ""
	I0924 19:14:48.752202   40358 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0924 19:14:48.752210   40358 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0924 19:14:48.752214   40358 command_runner.go:130] > storage_driver = "overlay"
	I0924 19:14:48.752221   40358 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0924 19:14:48.752231   40358 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0924 19:14:48.752237   40358 command_runner.go:130] > storage_option = [
	I0924 19:14:48.752241   40358 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0924 19:14:48.752246   40358 command_runner.go:130] > ]
	I0924 19:14:48.752253   40358 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0924 19:14:48.752261   40358 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0924 19:14:48.752268   40358 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0924 19:14:48.752273   40358 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0924 19:14:48.752281   40358 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0924 19:14:48.752286   40358 command_runner.go:130] > # always happen on a node reboot
	I0924 19:14:48.752291   40358 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0924 19:14:48.752302   40358 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0924 19:14:48.752309   40358 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0924 19:14:48.752317   40358 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0924 19:14:48.752322   40358 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0924 19:14:48.752331   40358 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0924 19:14:48.752339   40358 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0924 19:14:48.752345   40358 command_runner.go:130] > # internal_wipe = true
	I0924 19:14:48.752354   40358 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0924 19:14:48.752361   40358 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0924 19:14:48.752365   40358 command_runner.go:130] > # internal_repair = false
	I0924 19:14:48.752370   40358 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0924 19:14:48.752377   40358 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0924 19:14:48.752384   40358 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0924 19:14:48.752389   40358 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0924 19:14:48.752400   40358 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0924 19:14:48.752405   40358 command_runner.go:130] > [crio.api]
	I0924 19:14:48.752411   40358 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0924 19:14:48.752418   40358 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0924 19:14:48.752423   40358 command_runner.go:130] > # IP address on which the stream server will listen.
	I0924 19:14:48.752429   40358 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0924 19:14:48.752436   40358 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0924 19:14:48.752443   40358 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0924 19:14:48.752447   40358 command_runner.go:130] > # stream_port = "0"
	I0924 19:14:48.752455   40358 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0924 19:14:48.752464   40358 command_runner.go:130] > # stream_enable_tls = false
	I0924 19:14:48.752478   40358 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0924 19:14:48.752488   40358 command_runner.go:130] > # stream_idle_timeout = ""
	I0924 19:14:48.752500   40358 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0924 19:14:48.752512   40358 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0924 19:14:48.752521   40358 command_runner.go:130] > # minutes.
	I0924 19:14:48.752528   40358 command_runner.go:130] > # stream_tls_cert = ""
	I0924 19:14:48.752534   40358 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0924 19:14:48.752544   40358 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0924 19:14:48.752550   40358 command_runner.go:130] > # stream_tls_key = ""
	I0924 19:14:48.752556   40358 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0924 19:14:48.752563   40358 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0924 19:14:48.752577   40358 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0924 19:14:48.752583   40358 command_runner.go:130] > # stream_tls_ca = ""
	I0924 19:14:48.752590   40358 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 19:14:48.752597   40358 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0924 19:14:48.752605   40358 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 19:14:48.752611   40358 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0924 19:14:48.752617   40358 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0924 19:14:48.752624   40358 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0924 19:14:48.752628   40358 command_runner.go:130] > [crio.runtime]
	I0924 19:14:48.752637   40358 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0924 19:14:48.752645   40358 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0924 19:14:48.752650   40358 command_runner.go:130] > # "nofile=1024:2048"
	I0924 19:14:48.752656   40358 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0924 19:14:48.752662   40358 command_runner.go:130] > # default_ulimits = [
	I0924 19:14:48.752665   40358 command_runner.go:130] > # ]
	I0924 19:14:48.752674   40358 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0924 19:14:48.752678   40358 command_runner.go:130] > # no_pivot = false
	I0924 19:14:48.752687   40358 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0924 19:14:48.752695   40358 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0924 19:14:48.752702   40358 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0924 19:14:48.752707   40358 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0924 19:14:48.752714   40358 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0924 19:14:48.752720   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 19:14:48.752726   40358 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0924 19:14:48.752731   40358 command_runner.go:130] > # Cgroup setting for conmon
	I0924 19:14:48.752739   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0924 19:14:48.752746   40358 command_runner.go:130] > conmon_cgroup = "pod"
	I0924 19:14:48.752751   40358 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0924 19:14:48.752758   40358 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0924 19:14:48.752764   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 19:14:48.752770   40358 command_runner.go:130] > conmon_env = [
	I0924 19:14:48.752776   40358 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 19:14:48.752781   40358 command_runner.go:130] > ]
	I0924 19:14:48.752786   40358 command_runner.go:130] > # Additional environment variables to set for all the
	I0924 19:14:48.752793   40358 command_runner.go:130] > # containers. These are overridden if set in the
	I0924 19:14:48.752799   40358 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0924 19:14:48.752804   40358 command_runner.go:130] > # default_env = [
	I0924 19:14:48.752811   40358 command_runner.go:130] > # ]
	I0924 19:14:48.752819   40358 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0924 19:14:48.752828   40358 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0924 19:14:48.752834   40358 command_runner.go:130] > # selinux = false
	I0924 19:14:48.752840   40358 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0924 19:14:48.752848   40358 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0924 19:14:48.752853   40358 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0924 19:14:48.752859   40358 command_runner.go:130] > # seccomp_profile = ""
	I0924 19:14:48.752865   40358 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0924 19:14:48.752872   40358 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0924 19:14:48.752885   40358 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0924 19:14:48.752891   40358 command_runner.go:130] > # which might increase security.
	I0924 19:14:48.752896   40358 command_runner.go:130] > # This option is currently deprecated,
	I0924 19:14:48.752903   40358 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0924 19:14:48.752910   40358 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0924 19:14:48.752916   40358 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0924 19:14:48.752924   40358 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0924 19:14:48.752934   40358 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0924 19:14:48.752943   40358 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0924 19:14:48.752949   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.752954   40358 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0924 19:14:48.752961   40358 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0924 19:14:48.752966   40358 command_runner.go:130] > # the cgroup blockio controller.
	I0924 19:14:48.752972   40358 command_runner.go:130] > # blockio_config_file = ""
	I0924 19:14:48.752979   40358 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0924 19:14:48.752985   40358 command_runner.go:130] > # blockio parameters.
	I0924 19:14:48.752989   40358 command_runner.go:130] > # blockio_reload = false
	I0924 19:14:48.752997   40358 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0924 19:14:48.753002   40358 command_runner.go:130] > # irqbalance daemon.
	I0924 19:14:48.753007   40358 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0924 19:14:48.753015   40358 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0924 19:14:48.753021   40358 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0924 19:14:48.753030   40358 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0924 19:14:48.753038   40358 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0924 19:14:48.753046   40358 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0924 19:14:48.753054   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.753057   40358 command_runner.go:130] > # rdt_config_file = ""
	I0924 19:14:48.753062   40358 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0924 19:14:48.753068   40358 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0924 19:14:48.753083   40358 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0924 19:14:48.753089   40358 command_runner.go:130] > # separate_pull_cgroup = ""
	I0924 19:14:48.753095   40358 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0924 19:14:48.753103   40358 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0924 19:14:48.753107   40358 command_runner.go:130] > # will be added.
	I0924 19:14:48.753113   40358 command_runner.go:130] > # default_capabilities = [
	I0924 19:14:48.753117   40358 command_runner.go:130] > # 	"CHOWN",
	I0924 19:14:48.753123   40358 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0924 19:14:48.753127   40358 command_runner.go:130] > # 	"FSETID",
	I0924 19:14:48.753132   40358 command_runner.go:130] > # 	"FOWNER",
	I0924 19:14:48.753136   40358 command_runner.go:130] > # 	"SETGID",
	I0924 19:14:48.753141   40358 command_runner.go:130] > # 	"SETUID",
	I0924 19:14:48.753145   40358 command_runner.go:130] > # 	"SETPCAP",
	I0924 19:14:48.753151   40358 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0924 19:14:48.753155   40358 command_runner.go:130] > # 	"KILL",
	I0924 19:14:48.753160   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753167   40358 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0924 19:14:48.753176   40358 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0924 19:14:48.753185   40358 command_runner.go:130] > # add_inheritable_capabilities = false
	I0924 19:14:48.753193   40358 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0924 19:14:48.753201   40358 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 19:14:48.753205   40358 command_runner.go:130] > default_sysctls = [
	I0924 19:14:48.753210   40358 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0924 19:14:48.753215   40358 command_runner.go:130] > ]
	I0924 19:14:48.753220   40358 command_runner.go:130] > # List of devices on the host that a
	I0924 19:14:48.753228   40358 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0924 19:14:48.753232   40358 command_runner.go:130] > # allowed_devices = [
	I0924 19:14:48.753238   40358 command_runner.go:130] > # 	"/dev/fuse",
	I0924 19:14:48.753241   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753248   40358 command_runner.go:130] > # List of additional devices, specified as
	I0924 19:14:48.753255   40358 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0924 19:14:48.753262   40358 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0924 19:14:48.753268   40358 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 19:14:48.753274   40358 command_runner.go:130] > # additional_devices = [
	I0924 19:14:48.753277   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753284   40358 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0924 19:14:48.753288   40358 command_runner.go:130] > # cdi_spec_dirs = [
	I0924 19:14:48.753294   40358 command_runner.go:130] > # 	"/etc/cdi",
	I0924 19:14:48.753298   40358 command_runner.go:130] > # 	"/var/run/cdi",
	I0924 19:14:48.753303   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753309   40358 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0924 19:14:48.753317   40358 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0924 19:14:48.753323   40358 command_runner.go:130] > # Defaults to false.
	I0924 19:14:48.753328   40358 command_runner.go:130] > # device_ownership_from_security_context = false
	I0924 19:14:48.753336   40358 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0924 19:14:48.753344   40358 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0924 19:14:48.753348   40358 command_runner.go:130] > # hooks_dir = [
	I0924 19:14:48.753353   40358 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0924 19:14:48.753356   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753362   40358 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0924 19:14:48.753370   40358 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0924 19:14:48.753377   40358 command_runner.go:130] > # its default mounts from the following two files:
	I0924 19:14:48.753381   40358 command_runner.go:130] > #
	I0924 19:14:48.753387   40358 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0924 19:14:48.753395   40358 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0924 19:14:48.753403   40358 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0924 19:14:48.753408   40358 command_runner.go:130] > #
	I0924 19:14:48.753414   40358 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0924 19:14:48.753422   40358 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0924 19:14:48.753428   40358 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0924 19:14:48.753437   40358 command_runner.go:130] > #      only add mounts it finds in this file.
	I0924 19:14:48.753443   40358 command_runner.go:130] > #
	I0924 19:14:48.753447   40358 command_runner.go:130] > # default_mounts_file = ""
	I0924 19:14:48.753454   40358 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0924 19:14:48.753463   40358 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0924 19:14:48.753472   40358 command_runner.go:130] > pids_limit = 1024
	I0924 19:14:48.753484   40358 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0924 19:14:48.753495   40358 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0924 19:14:48.753507   40358 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0924 19:14:48.753522   40358 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0924 19:14:48.753531   40358 command_runner.go:130] > # log_size_max = -1
	I0924 19:14:48.753542   40358 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0924 19:14:48.753549   40358 command_runner.go:130] > # log_to_journald = false
	I0924 19:14:48.753555   40358 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0924 19:14:48.753560   40358 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0924 19:14:48.753567   40358 command_runner.go:130] > # Path to directory for container attach sockets.
	I0924 19:14:48.753571   40358 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0924 19:14:48.753579   40358 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0924 19:14:48.753583   40358 command_runner.go:130] > # bind_mount_prefix = ""
	I0924 19:14:48.753588   40358 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0924 19:14:48.753594   40358 command_runner.go:130] > # read_only = false
	I0924 19:14:48.753600   40358 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0924 19:14:48.753608   40358 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0924 19:14:48.753614   40358 command_runner.go:130] > # live configuration reload.
	I0924 19:14:48.753619   40358 command_runner.go:130] > # log_level = "info"
	I0924 19:14:48.753626   40358 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0924 19:14:48.753633   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.753640   40358 command_runner.go:130] > # log_filter = ""
	I0924 19:14:48.753646   40358 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0924 19:14:48.753654   40358 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0924 19:14:48.753660   40358 command_runner.go:130] > # separated by comma.
	I0924 19:14:48.753667   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753673   40358 command_runner.go:130] > # uid_mappings = ""
	I0924 19:14:48.753680   40358 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0924 19:14:48.753688   40358 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0924 19:14:48.753694   40358 command_runner.go:130] > # separated by comma.
	I0924 19:14:48.753702   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753711   40358 command_runner.go:130] > # gid_mappings = ""
	I0924 19:14:48.753720   40358 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0924 19:14:48.753727   40358 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 19:14:48.753736   40358 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 19:14:48.753743   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753750   40358 command_runner.go:130] > # minimum_mappable_uid = -1
	I0924 19:14:48.753755   40358 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0924 19:14:48.753765   40358 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 19:14:48.753774   40358 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 19:14:48.753783   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753787   40358 command_runner.go:130] > # minimum_mappable_gid = -1
	I0924 19:14:48.753795   40358 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0924 19:14:48.753803   40358 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0924 19:14:48.753812   40358 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0924 19:14:48.753818   40358 command_runner.go:130] > # ctr_stop_timeout = 30
	I0924 19:14:48.753824   40358 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0924 19:14:48.753830   40358 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0924 19:14:48.753837   40358 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0924 19:14:48.753844   40358 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0924 19:14:48.753848   40358 command_runner.go:130] > drop_infra_ctr = false
	I0924 19:14:48.753856   40358 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0924 19:14:48.753863   40358 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0924 19:14:48.753870   40358 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0924 19:14:48.753876   40358 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0924 19:14:48.753887   40358 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0924 19:14:48.753894   40358 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0924 19:14:48.753902   40358 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0924 19:14:48.753907   40358 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0924 19:14:48.753913   40358 command_runner.go:130] > # shared_cpuset = ""
	I0924 19:14:48.753920   40358 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0924 19:14:48.753928   40358 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0924 19:14:48.753932   40358 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0924 19:14:48.753941   40358 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0924 19:14:48.753947   40358 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0924 19:14:48.753953   40358 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0924 19:14:48.753963   40358 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0924 19:14:48.753970   40358 command_runner.go:130] > # enable_criu_support = false
	I0924 19:14:48.753975   40358 command_runner.go:130] > # Enable/disable the generation of the container,
	I0924 19:14:48.753983   40358 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0924 19:14:48.753989   40358 command_runner.go:130] > # enable_pod_events = false
	I0924 19:14:48.753997   40358 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0924 19:14:48.754012   40358 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0924 19:14:48.754016   40358 command_runner.go:130] > # default_runtime = "runc"
	I0924 19:14:48.754022   40358 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0924 19:14:48.754031   40358 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0924 19:14:48.754042   40358 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0924 19:14:48.754049   40358 command_runner.go:130] > # creation as a file is not desired either.
	I0924 19:14:48.754057   40358 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0924 19:14:48.754064   40358 command_runner.go:130] > # the hostname is being managed dynamically.
	I0924 19:14:48.754068   40358 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0924 19:14:48.754074   40358 command_runner.go:130] > # ]
	I0924 19:14:48.754081   40358 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0924 19:14:48.754090   40358 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0924 19:14:48.754095   40358 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0924 19:14:48.754103   40358 command_runner.go:130] > # Each entry in the table should follow the format:
	I0924 19:14:48.754108   40358 command_runner.go:130] > #
	I0924 19:14:48.754113   40358 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0924 19:14:48.754120   40358 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0924 19:14:48.754139   40358 command_runner.go:130] > # runtime_type = "oci"
	I0924 19:14:48.754146   40358 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0924 19:14:48.754151   40358 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0924 19:14:48.754158   40358 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0924 19:14:48.754162   40358 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0924 19:14:48.754169   40358 command_runner.go:130] > # monitor_env = []
	I0924 19:14:48.754173   40358 command_runner.go:130] > # privileged_without_host_devices = false
	I0924 19:14:48.754180   40358 command_runner.go:130] > # allowed_annotations = []
	I0924 19:14:48.754185   40358 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0924 19:14:48.754190   40358 command_runner.go:130] > # Where:
	I0924 19:14:48.754196   40358 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0924 19:14:48.754203   40358 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0924 19:14:48.754211   40358 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0924 19:14:48.754217   40358 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0924 19:14:48.754225   40358 command_runner.go:130] > #   in $PATH.
	I0924 19:14:48.754232   40358 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0924 19:14:48.754237   40358 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0924 19:14:48.754245   40358 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0924 19:14:48.754249   40358 command_runner.go:130] > #   state.
	I0924 19:14:48.754255   40358 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0924 19:14:48.754262   40358 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0924 19:14:48.754268   40358 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0924 19:14:48.754276   40358 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0924 19:14:48.754281   40358 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0924 19:14:48.754289   40358 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0924 19:14:48.754295   40358 command_runner.go:130] > #   The currently recognized values are:
	I0924 19:14:48.754303   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0924 19:14:48.754312   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0924 19:14:48.754320   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0924 19:14:48.754328   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0924 19:14:48.754335   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0924 19:14:48.754343   40358 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0924 19:14:48.754352   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0924 19:14:48.754358   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0924 19:14:48.754366   40358 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0924 19:14:48.754374   40358 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0924 19:14:48.754380   40358 command_runner.go:130] > #   deprecated option "conmon".
	I0924 19:14:48.754389   40358 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0924 19:14:48.754394   40358 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0924 19:14:48.754402   40358 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0924 19:14:48.754408   40358 command_runner.go:130] > #   should be moved to the container's cgroup
	I0924 19:14:48.754415   40358 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0924 19:14:48.754421   40358 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0924 19:14:48.754427   40358 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0924 19:14:48.754434   40358 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0924 19:14:48.754437   40358 command_runner.go:130] > #
	I0924 19:14:48.754442   40358 command_runner.go:130] > # Using the seccomp notifier feature:
	I0924 19:14:48.754449   40358 command_runner.go:130] > #
	I0924 19:14:48.754455   40358 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0924 19:14:48.754467   40358 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0924 19:14:48.754475   40358 command_runner.go:130] > #
	I0924 19:14:48.754483   40358 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0924 19:14:48.754495   40358 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0924 19:14:48.754502   40358 command_runner.go:130] > #
	I0924 19:14:48.754511   40358 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0924 19:14:48.754520   40358 command_runner.go:130] > # feature.
	I0924 19:14:48.754527   40358 command_runner.go:130] > #
	I0924 19:14:48.754533   40358 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0924 19:14:48.754540   40358 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0924 19:14:48.754548   40358 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0924 19:14:48.754555   40358 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0924 19:14:48.754561   40358 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0924 19:14:48.754564   40358 command_runner.go:130] > #
	I0924 19:14:48.754570   40358 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0924 19:14:48.754576   40358 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0924 19:14:48.754582   40358 command_runner.go:130] > #
	I0924 19:14:48.754587   40358 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0924 19:14:48.754594   40358 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0924 19:14:48.754597   40358 command_runner.go:130] > #
	I0924 19:14:48.754604   40358 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0924 19:14:48.754612   40358 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0924 19:14:48.754618   40358 command_runner.go:130] > # limitation.
	I0924 19:14:48.754623   40358 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0924 19:14:48.754631   40358 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0924 19:14:48.754637   40358 command_runner.go:130] > runtime_type = "oci"
	I0924 19:14:48.754642   40358 command_runner.go:130] > runtime_root = "/run/runc"
	I0924 19:14:48.754648   40358 command_runner.go:130] > runtime_config_path = ""
	I0924 19:14:48.754652   40358 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0924 19:14:48.754657   40358 command_runner.go:130] > monitor_cgroup = "pod"
	I0924 19:14:48.754663   40358 command_runner.go:130] > monitor_exec_cgroup = ""
	I0924 19:14:48.754667   40358 command_runner.go:130] > monitor_env = [
	I0924 19:14:48.754675   40358 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 19:14:48.754677   40358 command_runner.go:130] > ]
	I0924 19:14:48.754682   40358 command_runner.go:130] > privileged_without_host_devices = false
	I0924 19:14:48.754690   40358 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0924 19:14:48.754698   40358 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0924 19:14:48.754704   40358 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0924 19:14:48.754713   40358 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0924 19:14:48.754725   40358 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0924 19:14:48.754734   40358 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0924 19:14:48.754742   40358 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0924 19:14:48.754751   40358 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0924 19:14:48.754759   40358 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0924 19:14:48.754766   40358 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0924 19:14:48.754772   40358 command_runner.go:130] > # Example:
	I0924 19:14:48.754777   40358 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0924 19:14:48.754784   40358 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0924 19:14:48.754789   40358 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0924 19:14:48.754795   40358 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0924 19:14:48.754799   40358 command_runner.go:130] > # cpuset = 0
	I0924 19:14:48.754804   40358 command_runner.go:130] > # cpushares = "0-1"
	I0924 19:14:48.754807   40358 command_runner.go:130] > # Where:
	I0924 19:14:48.754814   40358 command_runner.go:130] > # The workload name is workload-type.
	I0924 19:14:48.754821   40358 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0924 19:14:48.754843   40358 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0924 19:14:48.754855   40358 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0924 19:14:48.754866   40358 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0924 19:14:48.754873   40358 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0924 19:14:48.754881   40358 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0924 19:14:48.754890   40358 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0924 19:14:48.754897   40358 command_runner.go:130] > # Default value is set to true
	I0924 19:14:48.754901   40358 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0924 19:14:48.754909   40358 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0924 19:14:48.754913   40358 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0924 19:14:48.754920   40358 command_runner.go:130] > # Default value is set to 'false'
	I0924 19:14:48.754925   40358 command_runner.go:130] > # disable_hostport_mapping = false
	I0924 19:14:48.754931   40358 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0924 19:14:48.754936   40358 command_runner.go:130] > #
	I0924 19:14:48.754942   40358 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0924 19:14:48.754947   40358 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0924 19:14:48.754953   40358 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0924 19:14:48.754958   40358 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0924 19:14:48.754966   40358 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0924 19:14:48.754970   40358 command_runner.go:130] > [crio.image]
	I0924 19:14:48.754975   40358 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0924 19:14:48.754979   40358 command_runner.go:130] > # default_transport = "docker://"
	I0924 19:14:48.754984   40358 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0924 19:14:48.754990   40358 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0924 19:14:48.754993   40358 command_runner.go:130] > # global_auth_file = ""
	I0924 19:14:48.754998   40358 command_runner.go:130] > # The image used to instantiate infra containers.
	I0924 19:14:48.755003   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.755007   40358 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0924 19:14:48.755013   40358 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0924 19:14:48.755018   40358 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0924 19:14:48.755023   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.755028   40358 command_runner.go:130] > # pause_image_auth_file = ""
	I0924 19:14:48.755033   40358 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0924 19:14:48.755038   40358 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0924 19:14:48.755044   40358 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0924 19:14:48.755049   40358 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0924 19:14:48.755053   40358 command_runner.go:130] > # pause_command = "/pause"
	I0924 19:14:48.755058   40358 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0924 19:14:48.755063   40358 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0924 19:14:48.755069   40358 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0924 19:14:48.755075   40358 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0924 19:14:48.755080   40358 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0924 19:14:48.755086   40358 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0924 19:14:48.755089   40358 command_runner.go:130] > # pinned_images = [
	I0924 19:14:48.755092   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755098   40358 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0924 19:14:48.755103   40358 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0924 19:14:48.755109   40358 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0924 19:14:48.755114   40358 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0924 19:14:48.755119   40358 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0924 19:14:48.755124   40358 command_runner.go:130] > # signature_policy = ""
	I0924 19:14:48.755129   40358 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0924 19:14:48.755137   40358 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0924 19:14:48.755145   40358 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0924 19:14:48.755153   40358 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0924 19:14:48.755160   40358 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0924 19:14:48.755165   40358 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0924 19:14:48.755173   40358 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0924 19:14:48.755182   40358 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0924 19:14:48.755187   40358 command_runner.go:130] > # changing them here.
	I0924 19:14:48.755191   40358 command_runner.go:130] > # insecure_registries = [
	I0924 19:14:48.755196   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755202   40358 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0924 19:14:48.755209   40358 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0924 19:14:48.755214   40358 command_runner.go:130] > # image_volumes = "mkdir"
	I0924 19:14:48.755221   40358 command_runner.go:130] > # Temporary directory to use for storing big files
	I0924 19:14:48.755226   40358 command_runner.go:130] > # big_files_temporary_dir = ""
	I0924 19:14:48.755233   40358 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0924 19:14:48.755237   40358 command_runner.go:130] > # CNI plugins.
	I0924 19:14:48.755241   40358 command_runner.go:130] > [crio.network]
	I0924 19:14:48.755247   40358 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0924 19:14:48.755254   40358 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0924 19:14:48.755258   40358 command_runner.go:130] > # cni_default_network = ""
	I0924 19:14:48.755265   40358 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0924 19:14:48.755270   40358 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0924 19:14:48.755277   40358 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0924 19:14:48.755281   40358 command_runner.go:130] > # plugin_dirs = [
	I0924 19:14:48.755287   40358 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0924 19:14:48.755290   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755296   40358 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0924 19:14:48.755302   40358 command_runner.go:130] > [crio.metrics]
	I0924 19:14:48.755307   40358 command_runner.go:130] > # Globally enable or disable metrics support.
	I0924 19:14:48.755313   40358 command_runner.go:130] > enable_metrics = true
	I0924 19:14:48.755318   40358 command_runner.go:130] > # Specify enabled metrics collectors.
	I0924 19:14:48.755324   40358 command_runner.go:130] > # Per default all metrics are enabled.
	I0924 19:14:48.755330   40358 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0924 19:14:48.755338   40358 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0924 19:14:48.755347   40358 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0924 19:14:48.755353   40358 command_runner.go:130] > # metrics_collectors = [
	I0924 19:14:48.755357   40358 command_runner.go:130] > # 	"operations",
	I0924 19:14:48.755363   40358 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0924 19:14:48.755368   40358 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0924 19:14:48.755374   40358 command_runner.go:130] > # 	"operations_errors",
	I0924 19:14:48.755378   40358 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0924 19:14:48.755384   40358 command_runner.go:130] > # 	"image_pulls_by_name",
	I0924 19:14:48.755388   40358 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0924 19:14:48.755397   40358 command_runner.go:130] > # 	"image_pulls_failures",
	I0924 19:14:48.755403   40358 command_runner.go:130] > # 	"image_pulls_successes",
	I0924 19:14:48.755407   40358 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0924 19:14:48.755414   40358 command_runner.go:130] > # 	"image_layer_reuse",
	I0924 19:14:48.755418   40358 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0924 19:14:48.755424   40358 command_runner.go:130] > # 	"containers_oom_total",
	I0924 19:14:48.755428   40358 command_runner.go:130] > # 	"containers_oom",
	I0924 19:14:48.755434   40358 command_runner.go:130] > # 	"processes_defunct",
	I0924 19:14:48.755438   40358 command_runner.go:130] > # 	"operations_total",
	I0924 19:14:48.755444   40358 command_runner.go:130] > # 	"operations_latency_seconds",
	I0924 19:14:48.755448   40358 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0924 19:14:48.755453   40358 command_runner.go:130] > # 	"operations_errors_total",
	I0924 19:14:48.755461   40358 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0924 19:14:48.755471   40358 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0924 19:14:48.755480   40358 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0924 19:14:48.755490   40358 command_runner.go:130] > # 	"image_pulls_success_total",
	I0924 19:14:48.755499   40358 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0924 19:14:48.755510   40358 command_runner.go:130] > # 	"containers_oom_count_total",
	I0924 19:14:48.755519   40358 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0924 19:14:48.755529   40358 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0924 19:14:48.755536   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755541   40358 command_runner.go:130] > # The port on which the metrics server will listen.
	I0924 19:14:48.755547   40358 command_runner.go:130] > # metrics_port = 9090
	I0924 19:14:48.755553   40358 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0924 19:14:48.755557   40358 command_runner.go:130] > # metrics_socket = ""
	I0924 19:14:48.755564   40358 command_runner.go:130] > # The certificate for the secure metrics server.
	I0924 19:14:48.755569   40358 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0924 19:14:48.755577   40358 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0924 19:14:48.755584   40358 command_runner.go:130] > # certificate on any modification event.
	I0924 19:14:48.755588   40358 command_runner.go:130] > # metrics_cert = ""
	I0924 19:14:48.755594   40358 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0924 19:14:48.755599   40358 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0924 19:14:48.755604   40358 command_runner.go:130] > # metrics_key = ""
	I0924 19:14:48.755609   40358 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0924 19:14:48.755616   40358 command_runner.go:130] > [crio.tracing]
	I0924 19:14:48.755622   40358 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0924 19:14:48.755628   40358 command_runner.go:130] > # enable_tracing = false
	I0924 19:14:48.755634   40358 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0924 19:14:48.755641   40358 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0924 19:14:48.755648   40358 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0924 19:14:48.755655   40358 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0924 19:14:48.755659   40358 command_runner.go:130] > # CRI-O NRI configuration.
	I0924 19:14:48.755665   40358 command_runner.go:130] > [crio.nri]
	I0924 19:14:48.755670   40358 command_runner.go:130] > # Globally enable or disable NRI.
	I0924 19:14:48.755676   40358 command_runner.go:130] > # enable_nri = false
	I0924 19:14:48.755685   40358 command_runner.go:130] > # NRI socket to listen on.
	I0924 19:14:48.755692   40358 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0924 19:14:48.755697   40358 command_runner.go:130] > # NRI plugin directory to use.
	I0924 19:14:48.755703   40358 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0924 19:14:48.755708   40358 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0924 19:14:48.755715   40358 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0924 19:14:48.755720   40358 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0924 19:14:48.755726   40358 command_runner.go:130] > # nri_disable_connections = false
	I0924 19:14:48.755731   40358 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0924 19:14:48.755738   40358 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0924 19:14:48.755743   40358 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0924 19:14:48.755749   40358 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0924 19:14:48.755756   40358 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0924 19:14:48.755761   40358 command_runner.go:130] > [crio.stats]
	I0924 19:14:48.755767   40358 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0924 19:14:48.755774   40358 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0924 19:14:48.755778   40358 command_runner.go:130] > # stats_collection_period = 0
	I0924 19:14:48.755852   40358 cni.go:84] Creating CNI manager for ""
	I0924 19:14:48.755863   40358 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 19:14:48.755874   40358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:14:48.755898   40358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-624105 NodeName:multinode-624105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:14:48.756024   40358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-624105"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:14:48.756086   40358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:14:48.765851   40358 command_runner.go:130] > kubeadm
	I0924 19:14:48.765865   40358 command_runner.go:130] > kubectl
	I0924 19:14:48.765870   40358 command_runner.go:130] > kubelet
	I0924 19:14:48.765890   40358 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:14:48.765954   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:14:48.774993   40358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0924 19:14:48.790594   40358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:14:48.806557   40358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0924 19:14:48.822268   40358 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0924 19:14:48.826009   40358 command_runner.go:130] > 192.168.39.206	control-plane.minikube.internal
	I0924 19:14:48.826070   40358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:14:48.956770   40358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:14:48.971072   40358 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105 for IP: 192.168.39.206
	I0924 19:14:48.971099   40358 certs.go:194] generating shared ca certs ...
	I0924 19:14:48.971120   40358 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:14:48.971312   40358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:14:48.971376   40358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:14:48.971392   40358 certs.go:256] generating profile certs ...
	I0924 19:14:48.971497   40358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/client.key
	I0924 19:14:48.971582   40358 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key.11e7b858
	I0924 19:14:48.971637   40358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key
	I0924 19:14:48.971655   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 19:14:48.971678   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 19:14:48.971694   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 19:14:48.971712   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 19:14:48.971732   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 19:14:48.971751   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 19:14:48.971767   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 19:14:48.971781   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 19:14:48.971920   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:14:48.971996   40358 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:14:48.972010   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:14:48.972044   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:14:48.972082   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:14:48.972113   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:14:48.972165   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:14:48.972206   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 19:14:48.972225   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 19:14:48.972240   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:48.973058   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:14:48.996455   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:14:49.019646   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:14:49.041856   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:14:49.064468   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:14:49.086036   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:14:49.108625   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:14:49.132800   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:14:49.155107   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:14:49.176420   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:14:49.198650   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:14:49.221496   40358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:14:49.237212   40358 ssh_runner.go:195] Run: openssl version
	I0924 19:14:49.242192   40358 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0924 19:14:49.242332   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:14:49.253966   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258279   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258334   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258397   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.263832   40358 command_runner.go:130] > 3ec20f2e
	I0924 19:14:49.263890   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:14:49.273932   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:14:49.285289   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289491   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289622   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289675   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.295185   40358 command_runner.go:130] > b5213941
	I0924 19:14:49.295246   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:14:49.305849   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:14:49.318024   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322171   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322412   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322461   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.327863   40358 command_runner.go:130] > 51391683
	I0924 19:14:49.328062   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:14:49.338229   40358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:14:49.342367   40358 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:14:49.342388   40358 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0924 19:14:49.342397   40358 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0924 19:14:49.342408   40358 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 19:14:49.342415   40358 command_runner.go:130] > Access: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342419   40358 command_runner.go:130] > Modify: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342424   40358 command_runner.go:130] > Change: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342429   40358 command_runner.go:130] >  Birth: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342480   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:14:49.347737   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.347920   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:14:49.352961   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.353129   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:14:49.358358   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.358403   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:14:49.363474   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.363745   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:14:49.368873   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.369067   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:14:49.374451   40358 command_runner.go:130] > Certificate will not expire
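	The repeated "-checkend 86400" probes above simply ask whether each certificate is still valid 24 hours from now. A minimal Go sketch of the same check, for illustration only (the certificate path is a placeholder, not one used by this run):
	
	// Report whether a PEM-encoded certificate expires within the next 24 hours,
	// mirroring "openssl x509 -noout -in <cert> -checkend 86400".
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/path/to/cert.crt") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}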
	I0924 19:14:49.374512   40358 kubeadm.go:392] StartCluster: {Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:14:49.374692   40358 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:14:49.374737   40358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:14:49.418288   40358 command_runner.go:130] > c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571
	I0924 19:14:49.418319   40358 command_runner.go:130] > 779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800
	I0924 19:14:49.418329   40358 command_runner.go:130] > 5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4
	I0924 19:14:49.418340   40358 command_runner.go:130] > 1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f
	I0924 19:14:49.418349   40358 command_runner.go:130] > 214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737
	I0924 19:14:49.418358   40358 command_runner.go:130] > ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189
	I0924 19:14:49.418370   40358 command_runner.go:130] > cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33
	I0924 19:14:49.418384   40358 command_runner.go:130] > ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9
	I0924 19:14:49.418411   40358 cri.go:89] found id: "c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571"
	I0924 19:14:49.418421   40358 cri.go:89] found id: "779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800"
	I0924 19:14:49.418429   40358 cri.go:89] found id: "5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4"
	I0924 19:14:49.418434   40358 cri.go:89] found id: "1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f"
	I0924 19:14:49.418440   40358 cri.go:89] found id: "214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737"
	I0924 19:14:49.418445   40358 cri.go:89] found id: "ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189"
	I0924 19:14:49.418452   40358 cri.go:89] found id: "cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33"
	I0924 19:14:49.418457   40358 cri.go:89] found id: "ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9"
	I0924 19:14:49.418462   40358 cri.go:89] found id: ""
	I0924 19:14:49.418517   40358 ssh_runner.go:195] Run: sudo runc list -f json
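	The container listing just above amounts to running crictl with a kube-system label filter and splitting its output into container IDs. A small illustrative sketch of that step in Go; it shells out to crictl (assumed to be installed and runnable via sudo) and is not minikube's actual cri.go code:
	
	// List all kube-system containers known to the CRI runtime and print their IDs,
	// mirroring: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}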
	
	
	==> CRI-O <==
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.412275010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205395412252418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1c6099d-3974-4056-8475-7181d74b9032 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.412744640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa036bed-9a4c-4c88-8b62-4a731694c21b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.412796283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa036bed-9a4c-4c88-8b62-4a731694c21b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.413133052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa036bed-9a4c-4c88-8b62-4a731694c21b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.448804178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb2ae826-a332-420e-b007-551dd570567b name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.448875288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb2ae826-a332-420e-b007-551dd570567b name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.449933567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=802a5a6e-8c92-4c5b-ae41-22784633072a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.450316702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205395450295751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=802a5a6e-8c92-4c5b-ae41-22784633072a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.451682987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eec22406-b981-4b05-8aca-b7f22898701b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.451743467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eec22406-b981-4b05-8aca-b7f22898701b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.452219340Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eec22406-b981-4b05-8aca-b7f22898701b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.498456132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af2b573a-49ed-4f23-997d-614a084c0fbe name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.498910444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af2b573a-49ed-4f23-997d-614a084c0fbe name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.500562807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d134efd6-568d-46e2-bea7-0793aa44059c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.501113811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205395501085895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d134efd6-568d-46e2-bea7-0793aa44059c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.501784892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aca24c5f-f046-46ad-9984-51a5d13cbe43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.501876917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aca24c5f-f046-46ad-9984-51a5d13cbe43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.502294927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aca24c5f-f046-46ad-9984-51a5d13cbe43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.541421733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52e7b7fd-301c-44ec-8f5c-cbe479cfc18b name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.541499312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52e7b7fd-301c-44ec-8f5c-cbe479cfc18b name=/runtime.v1.RuntimeService/Version
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.542757248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=792dbbbf-cb4a-4b49-ab03-42fb61427f50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.543212887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205395543187678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=792dbbbf-cb4a-4b49-ab03-42fb61427f50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.543848656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63632857-088d-4fd7-a9b9-993cf494c9fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.543946692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63632857-088d-4fd7-a9b9-993cf494c9fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:16:35 multinode-624105 crio[2691]: time="2024-09-24 19:16:35.544809367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63632857-088d-4fd7-a9b9-993cf494c9fc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	debeb8627dc23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d411e67640018       busybox-7dff88458-b22dm
	6ee4e13311361       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   88ce5911c6530       kindnet-5hztc
	ba242595624ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   6f80a6fc6a01c       coredns-7c65d6cfc9-7bx4l
	34dc96b8e02fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   7c938e3431dac       storage-provisioner
	12f0d3f81d7d5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   c60043af4455b       kube-proxy-4sr25
	c8646faab03b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   1029ee7cd042f       kube-apiserver-multinode-624105
	63e4ba44e3427       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   7b5f19cd9cde3       kube-controller-manager-multinode-624105
	297f0f1f9170a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   2eba9b87ef621       kube-scheduler-multinode-624105
	0310cb335531e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   87d3d012af8d4       etcd-multinode-624105
	702eb68e64dd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   49e25cda6a219       busybox-7dff88458-b22dm
	c23a3cbd5cfd8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   776f5812b09ee       coredns-7c65d6cfc9-7bx4l
	779b0041b60cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   1d20bab8a8dae       storage-provisioner
	5c10e265d8db3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   667dfa7f0b277       kube-proxy-4sr25
	1df74ab6f5ff0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   e81fc3310529b       kindnet-5hztc
	214cabb794f93       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   701a1c92b5e7d       kube-controller-manager-multinode-624105
	ba04f08547dac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   16b7bc1ab28e4       kube-scheduler-multinode-624105
	cd329fa120f18       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   b388e11fd6f33       kube-apiserver-multinode-624105
	ca9ffec06dd06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   82d82948f36c9       etcd-multinode-624105
	
	
	==> coredns [ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35556 - 10213 "HINFO IN 3524561998326851029.7241639463776507848. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026108159s
	
	
	==> coredns [c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571] <==
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002005567s
	[INFO] 10.244.1.2:37518 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236585s
	[INFO] 10.244.1.2:48076 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100055s
	[INFO] 10.244.1.2:54152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001322878s
	[INFO] 10.244.1.2:51190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090554s
	[INFO] 10.244.1.2:55650 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007019s
	[INFO] 10.244.1.2:57655 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011142s
	[INFO] 10.244.0.3:52016 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167023s
	[INFO] 10.244.0.3:46783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066374s
	[INFO] 10.244.0.3:52883 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054553s
	[INFO] 10.244.0.3:55160 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078248s
	[INFO] 10.244.1.2:51362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179033s
	[INFO] 10.244.1.2:54474 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133922s
	[INFO] 10.244.1.2:57810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104456s
	[INFO] 10.244.1.2:35217 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193442s
	[INFO] 10.244.0.3:58692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019231s
	[INFO] 10.244.0.3:40896 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141448s
	[INFO] 10.244.0.3:40362 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000227151s
	[INFO] 10.244.0.3:49887 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095212s
	[INFO] 10.244.1.2:36728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223817s
	[INFO] 10.244.1.2:44161 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133802s
	[INFO] 10.244.1.2:52014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019975s
	[INFO] 10.244.1.2:37081 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111589s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-624105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-624105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=multinode-624105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_07_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:07:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-624105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:16:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:08:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-624105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 105bf671b3ed4f6e882b36bc7c330a73
	  System UUID:                105bf671-b3ed-4f6e-882b-36bc7c330a73
	  Boot ID:                    c1b43a78-f120-43d0-b77c-4cfca1797fa7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b22dm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 coredns-7c65d6cfc9-7bx4l                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m32s
	  kube-system                 etcd-multinode-624105                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m37s
	  kube-system                 kindnet-5hztc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m33s
	  kube-system                 kube-apiserver-multinode-624105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-multinode-624105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-4sr25                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-multinode-624105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m43s (x8 over 8m43s)  kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s (x8 over 8m43s)  kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s (x7 over 8m43s)  kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m37s                  kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s                  kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m37s                  kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m33s                  node-controller  Node multinode-624105 event: Registered Node multinode-624105 in Controller
	  Normal  NodeReady                7m51s                  kubelet          Node multinode-624105 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-624105 event: Registered Node multinode-624105 in Controller
	
	
	Name:               multinode-624105-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-624105-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=multinode-624105
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T19_15_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-624105-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:16:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:15:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:15:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:15:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:15:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-624105-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 57aa8182c1a84d5c991ebf30059fa699
	  System UUID:                57aa8182-c1a8-4d5c-991e-bf30059fa699
	  Boot ID:                    f1eb433a-4b82-40b9-96fb-7a46ea2ec550
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ln4qn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-prfnr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-proxy-wp4bg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m19s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m22s (x2 over 7m23s)  kubelet     Node multinode-624105-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s (x2 over 7m23s)  kubelet     Node multinode-624105-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s (x2 over 7m23s)  kubelet     Node multinode-624105-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m4s                   kubelet     Node multinode-624105-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-624105-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-624105-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-624105-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-624105-m02 status is now: NodeReady
	
	
	Name:               multinode-624105-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-624105-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=multinode-624105
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T19_16_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:16:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-624105-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:16:32 +0000   Tue, 24 Sep 2024 19:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:16:32 +0000   Tue, 24 Sep 2024 19:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:16:32 +0000   Tue, 24 Sep 2024 19:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:16:32 +0000   Tue, 24 Sep 2024 19:16:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-624105-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 656ff59dd6c04417be6b4cea07921ecd
	  System UUID:                656ff59d-d6c0-4417-be6b-4cea07921ecd
	  Boot ID:                    3b4a222d-4c2b-41dd-8508-7dbbb6502819
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hl4xz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-proxy-d2292    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m31s (x2 over 6m31s)  kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x2 over 6m31s)  kubelet          Node multinode-624105-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x2 over 6m31s)  kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m11s                  kubelet          Node multinode-624105-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-624105-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet          Node multinode-624105-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-624105-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-624105-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-624105-m03 event: Registered Node multinode-624105-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-624105-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058731] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062782] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.144551] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.139600] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.263591] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.552997] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.695499] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.054418] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989191] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.074164] kauditd_printk_skb: 69 callbacks suppressed
	[Sep24 19:08] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.104239] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[ +41.559679] kauditd_printk_skb: 69 callbacks suppressed
	[Sep24 19:09] kauditd_printk_skb: 12 callbacks suppressed
	[Sep24 19:14] systemd-fstab-generator[2616]: Ignoring "noauto" option for root device
	[  +0.141657] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.160737] systemd-fstab-generator[2642]: Ignoring "noauto" option for root device
	[  +0.140339] systemd-fstab-generator[2654]: Ignoring "noauto" option for root device
	[  +0.281769] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.640557] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +2.092155] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +4.656845] kauditd_printk_skb: 184 callbacks suppressed
	[Sep24 19:15] systemd-fstab-generator[3730]: Ignoring "noauto" option for root device
	[  +0.093524] kauditd_printk_skb: 34 callbacks suppressed
	[ +17.825745] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc] <==
	{"level":"info","ts":"2024-09-24T19:14:52.242058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 switched to configuration voters=(10182824043138087653)"}
	{"level":"info","ts":"2024-09-24T19:14:52.243781Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","added-peer-id":"8d50a8842d8d7ae5","added-peer-peer-urls":["https://192.168.39.206:2380"]}
	{"level":"info","ts":"2024-09-24T19:14:52.243914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:14:52.243961Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:14:52.286627Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:14:52.286877Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:14:52.286926Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:14:52.287028Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:14:52.287055Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:14:53.580231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.585386Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-624105 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:14:53.585510Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:14:53.585544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:14:53.585894Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:14:53.585921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:14:53.586624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:14:53.586758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:14:53.587515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2024-09-24T19:14:53.587538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9] <==
	{"level":"info","ts":"2024-09-24T19:07:54.114375Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:07:54.114532Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114640Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114671Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114694Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:07:54.114713Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:07:54.114721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:07:54.115441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:07:54.117032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:07:54.117291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:07:54.118028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"warn","ts":"2024-09-24T19:09:12.985468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.898148ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8855644931764805458 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:7ae592256ed45f51>","response":"size:41"}
	{"level":"info","ts":"2024-09-24T19:09:20.890078Z","caller":"traceutil/trace.go:171","msg":"trace[1565786653] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"205.882799ms","start":"2024-09-24T19:09:20.684170Z","end":"2024-09-24T19:09:20.890053Z","steps":["trace[1565786653] 'process raft request'  (duration: 205.783285ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:10:04.490101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.692187ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8855644931764805950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-624105-m03.17f843ccdbf8af52\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-624105-m03.17f843ccdbf8af52\" value_size:642 lease:8855644931764805457 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-24T19:10:04.490312Z","caller":"traceutil/trace.go:171","msg":"trace[1025340962] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"233.684519ms","start":"2024-09-24T19:10:04.256609Z","end":"2024-09-24T19:10:04.490293Z","steps":["trace[1025340962] 'process raft request'  (duration: 74.540529ms)","trace[1025340962] 'compare'  (duration: 158.606401ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T19:13:16.281374Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T19:13:16.281430Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-624105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"]}
	{"level":"warn","ts":"2024-09-24T19:13:16.281510Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.281589Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.335710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.206:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.335759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.206:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T19:13:16.335811Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d50a8842d8d7ae5","current-leader-member-id":"8d50a8842d8d7ae5"}
	{"level":"info","ts":"2024-09-24T19:13:16.338446Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:13:16.338621Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:13:16.338643Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-624105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"]}
	
	
	==> kernel <==
	 19:16:35 up 9 min,  0 users,  load average: 0.09, 0.16, 0.11
	Linux multinode-624105 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f] <==
	I0924 19:12:34.478736       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:44.469142       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:12:44.469245       1 main.go:299] handling current node
	I0924 19:12:44.469274       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:12:44.469293       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:12:44.469477       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:12:44.469508       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:54.469711       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:12:54.469813       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:54.469939       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:12:54.469961       1 main.go:299] handling current node
	I0924 19:12:54.469984       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:12:54.469999       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:04.469164       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:13:04.469215       1 main.go:299] handling current node
	I0924 19:13:04.469230       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:13:04.469235       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:04.469414       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:13:04.469436       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:13:14.472124       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:13:14.472185       1 main.go:299] handling current node
	I0924 19:13:14.472209       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:13:14.472215       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:14.472311       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:13:14.472385       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08] <==
	I0924 19:15:46.665118       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:15:56.665314       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:15:56.665428       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:15:56.665565       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:15:56.665587       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:15:56.665633       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:15:56.665648       1 main.go:299] handling current node
	I0924 19:16:06.665658       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:16:06.665708       1 main.go:299] handling current node
	I0924 19:16:06.665722       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:16:06.665727       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:16:06.665904       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:16:06.665926       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:16:16.666982       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:16:16.667029       1 main.go:299] handling current node
	I0924 19:16:16.667048       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:16:16.667056       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:16:16.667275       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:16:16.667303       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.2.0/24] 
	I0924 19:16:26.665375       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:16:26.665413       1 main.go:299] handling current node
	I0924 19:16:26.665427       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:16:26.665432       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:16:26.665560       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:16:26.665576       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd] <==
	I0924 19:14:54.708149       1 policy_source.go:224] refreshing policies
	I0924 19:14:54.727495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 19:14:54.728676       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 19:14:54.728703       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 19:14:54.734623       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 19:14:54.734906       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 19:14:54.735607       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 19:14:54.735785       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 19:14:54.736808       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 19:14:54.739207       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 19:14:54.744579       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 19:14:54.744739       1 aggregator.go:171] initial CRD sync complete...
	I0924 19:14:54.744771       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 19:14:54.744793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 19:14:54.744815       1 cache.go:39] Caches are synced for autoregister controller
	E0924 19:14:54.756299       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0924 19:14:54.788947       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 19:14:55.650866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:14:56.647758       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 19:14:56.777623       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 19:14:56.794311       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 19:14:56.878868       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:14:56.885968       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:14:58.232929       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:14:58.334089       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33] <==
	I0924 19:07:56.214068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0924 19:07:56.218621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0924 19:07:56.218654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:07:56.794590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:07:56.838167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:07:56.934779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0924 19:07:56.940300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I0924 19:07:56.941123       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 19:07:56.944912       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:07:57.288956       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 19:07:58.067299       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 19:07:58.087139       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 19:07:58.103768       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 19:08:02.637146       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 19:08:02.937596       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 19:09:37.390091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35286: use of closed network connection
	E0924 19:09:37.551590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35308: use of closed network connection
	E0924 19:09:37.739795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35314: use of closed network connection
	E0924 19:09:37.901745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35324: use of closed network connection
	E0924 19:09:38.073095       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35330: use of closed network connection
	E0924 19:09:38.236102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35342: use of closed network connection
	E0924 19:09:38.506696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35358: use of closed network connection
	E0924 19:09:38.660508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35368: use of closed network connection
	E0924 19:09:38.997708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35410: use of closed network connection
	I0924 19:13:16.280149       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737] <==
	I0924 19:10:52.268973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:52.269146       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:10:53.329936       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-624105-m03\" does not exist"
	I0924 19:10:53.330027       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:10:53.345210       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-624105-m03" podCIDRs=["10.244.3.0/24"]
	I0924 19:10:53.345281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.345359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.359047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.766711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:54.081975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:57.139657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:03.664016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:11.166948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:11:11.167051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:11.183020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:12.078584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.098460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m03"
	I0924 19:11:57.098740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:11:57.100814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.120559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:11:57.120808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.154394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.471165ms"
	I0924 19:11:57.154531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.596µs"
	I0924 19:12:02.163865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:12:12.235078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	
	
	==> kube-controller-manager [63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1] <==
	I0924 19:15:54.703052       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:15:54.714547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:15:54.722361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.636µs"
	I0924 19:15:54.753263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.84µs"
	I0924 19:15:57.949425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.408066ms"
	I0924 19:15:57.949638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.389µs"
	I0924 19:15:58.238098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:16:06.489908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:16:12.274051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:12.288261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:12.499001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:12.499608       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:13.504570       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-624105-m03\" does not exist"
	I0924 19:16:13.504646       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:13.525674       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-624105-m03" podCIDRs=["10.244.2.0/24"]
	I0924 19:16:13.525715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:13.525735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:13.944465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:14.262858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:18.346844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:23.648109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:32.444289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:32.444513       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:32.455263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:33.255953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	
	
	==> kube-proxy [12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:14:56.045447       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:14:56.055734       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E0924 19:14:56.055808       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:14:56.108730       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:14:56.108781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:14:56.108807       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:14:56.111536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:14:56.111827       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:14:56.112104       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:14:56.113202       1 config.go:199] "Starting service config controller"
	I0924 19:14:56.113428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:14:56.113518       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:14:56.113544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:14:56.113962       1 config.go:328] "Starting node config controller"
	I0924 19:14:56.113997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:14:56.214481       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:14:56.214569       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:14:56.214580       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:08:04.017968       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:08:04.102710       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E0924 19:08:04.102925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:08:04.360294       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:08:04.360946       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:08:04.361049       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:08:04.366546       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:08:04.367472       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:08:04.367704       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:08:04.372893       1 config.go:199] "Starting service config controller"
	I0924 19:08:04.374819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:08:04.373152       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:08:04.374874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:08:04.374888       1 config.go:328] "Starting node config controller"
	I0924 19:08:04.374905       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:08:04.474985       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:08:04.475038       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:08:04.475068       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4] <==
	I0924 19:14:53.253368       1 serving.go:386] Generated self-signed cert in-memory
	W0924 19:14:54.674691       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:14:54.674835       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:14:54.675013       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:14:54.675039       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:14:54.744360       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 19:14:54.744979       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:14:54.752689       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 19:14:54.753650       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:14:54.753678       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:14:54.753700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 19:14:54.853931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189] <==
	E0924 19:07:55.330156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:55.328667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:07:55.330258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:55.326730       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:07:55.330405       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:07:55.326942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:07:55.330508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.177752       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:07:56.177863       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:07:56.355728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 19:07:56.355937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.379838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 19:07:56.379925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.556159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:07:56.556283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.561896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 19:07:56.562069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.576356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:07:56.576530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.578593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:07:56.578676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.580727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:07:56.580762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 19:07:58.822917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 19:13:16.290195       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 19:15:01 multinode-624105 kubelet[2901]: E0924 19:15:01.247951    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205301247624911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:01 multinode-624105 kubelet[2901]: E0924 19:15:01.247992    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205301247624911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:11 multinode-624105 kubelet[2901]: E0924 19:15:11.249190    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205311248886993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:11 multinode-624105 kubelet[2901]: E0924 19:15:11.249214    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205311248886993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:21 multinode-624105 kubelet[2901]: E0924 19:15:21.250805    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205321250117944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:21 multinode-624105 kubelet[2901]: E0924 19:15:21.251603    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205321250117944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:31 multinode-624105 kubelet[2901]: E0924 19:15:31.254312    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205331253948754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:31 multinode-624105 kubelet[2901]: E0924 19:15:31.254373    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205331253948754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:41 multinode-624105 kubelet[2901]: E0924 19:15:41.255984    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205341255221549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:41 multinode-624105 kubelet[2901]: E0924 19:15:41.257318    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205341255221549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:51 multinode-624105 kubelet[2901]: E0924 19:15:51.222011    2901 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 19:15:51 multinode-624105 kubelet[2901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 19:15:51 multinode-624105 kubelet[2901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 19:15:51 multinode-624105 kubelet[2901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 19:15:51 multinode-624105 kubelet[2901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 19:15:51 multinode-624105 kubelet[2901]: E0924 19:15:51.259527    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205351259081671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:15:51 multinode-624105 kubelet[2901]: E0924 19:15:51.259835    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205351259081671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:01 multinode-624105 kubelet[2901]: E0924 19:16:01.261504    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205361261211439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:01 multinode-624105 kubelet[2901]: E0924 19:16:01.261543    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205361261211439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:11 multinode-624105 kubelet[2901]: E0924 19:16:11.264808    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205371263642160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:11 multinode-624105 kubelet[2901]: E0924 19:16:11.265146    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205371263642160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:21 multinode-624105 kubelet[2901]: E0924 19:16:21.267801    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205381267164920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:21 multinode-624105 kubelet[2901]: E0924 19:16:21.267869    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205381267164920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:31 multinode-624105 kubelet[2901]: E0924 19:16:31.269672    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205391269166327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:16:31 multinode-624105 kubelet[2901]: E0924 19:16:31.269917    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205391269166327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:16:35.151868   41469 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19700-3751/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
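The "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB token limit on an over-long line in lastStart.txt. Below is a minimal, illustrative Go sketch of that failure mode and the usual remedy of enlarging the scanner buffer; the file path is copied from the log line, and everything else (buffer sizes, error handling) is an assumption for illustration, not minikube's actual logs code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19700-3751/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-token limit is bufio.MaxScanTokenSize (64 KiB); a longer
		// line makes sc.Err() return bufio.ErrTooLong ("token too long"), which is
		// the error reported above. Raising the limit avoids it for large log lines.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process one line of the start log
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}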
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-624105 -n multinode-624105
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-624105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.18s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 stop
E0924 19:17:24.268905   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:17:52.858621   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-624105 stop: exit status 82 (2m0.452693545s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-624105-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-624105 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 status: (18.788814815s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr: (3.359799733s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr": 
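The two "incorrect number of stopped ..." messages above indicate the test found fewer "Stopped" hosts and kubelets in the `minikube status` output than it expected; since the stop command timed out with exit status 82, part of the cluster was still "Running" when status was sampled. The sketch below illustrates that kind of check under those assumptions; countStopped is a hypothetical helper for illustration, not the actual code in multinode_test.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// countStopped runs `minikube status` for a profile and counts how many
	// components report "Stopped". After a fully successful `minikube stop`,
	// every Host and Kubelet line would be expected to read "Stopped".
	func countStopped(profile string) int {
		// `minikube status` exits non-zero when components are stopped, so the
		// exit code alone is not meaningful here; inspect the output text instead.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile, "status", "--alsologtostderr").CombinedOutput()
		return strings.Count(string(out), "Stopped")
	}

	func main() {
		n := countStopped("multinode-624105")
		fmt.Printf("components reporting Stopped: %d\n", n)
	}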
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-624105 -n multinode-624105
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 logs -n 25: (1.303007707s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105:/home/docker/cp-test_multinode-624105-m02_multinode-624105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105 sudo cat                                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m02_multinode-624105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03:/home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105-m03 sudo cat                                   | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp testdata/cp-test.txt                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105:/home/docker/cp-test_multinode-624105-m03_multinode-624105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105 sudo cat                                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02:/home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105-m02 sudo cat                                   | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-624105 node stop m03                                                          | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	| node    | multinode-624105 node start                                                             | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| stop    | -p multinode-624105                                                                     | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| start   | -p multinode-624105                                                                     | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:13 UTC | 24 Sep 24 19:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC |                     |
	| node    | multinode-624105 node delete                                                            | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC | 24 Sep 24 19:16 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-624105 stop                                                                   | multinode-624105 | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:13:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:13:15.483380   40358 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:13:15.483517   40358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:13:15.483527   40358 out.go:358] Setting ErrFile to fd 2...
	I0924 19:13:15.483534   40358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:13:15.483748   40358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:13:15.484327   40358 out.go:352] Setting JSON to false
	I0924 19:13:15.485213   40358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3346,"bootTime":1727201849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:13:15.485306   40358 start.go:139] virtualization: kvm guest
	I0924 19:13:15.487587   40358 out.go:177] * [multinode-624105] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:13:15.489076   40358 notify.go:220] Checking for updates...
	I0924 19:13:15.489103   40358 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:13:15.490692   40358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:13:15.492027   40358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:13:15.493313   40358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:13:15.494436   40358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:13:15.495594   40358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:13:15.497245   40358 config.go:182] Loaded profile config "multinode-624105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:13:15.497332   40358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:13:15.497800   40358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:13:15.497840   40358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:13:15.512831   40358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0924 19:13:15.513325   40358 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:13:15.513874   40358 main.go:141] libmachine: Using API Version  1
	I0924 19:13:15.513924   40358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:13:15.514289   40358 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:13:15.514501   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.549364   40358 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:13:15.550456   40358 start.go:297] selected driver: kvm2
	I0924 19:13:15.550476   40358 start.go:901] validating driver "kvm2" against &{Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:13:15.550620   40358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:13:15.551020   40358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:13:15.551112   40358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:13:15.566344   40358 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:13:15.567100   40358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:13:15.567129   40358 cni.go:84] Creating CNI manager for ""
	I0924 19:13:15.567163   40358 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 19:13:15.567232   40358 start.go:340] cluster config:
	{Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:13:15.567367   40358 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:13:15.569214   40358 out.go:177] * Starting "multinode-624105" primary control-plane node in "multinode-624105" cluster
	I0924 19:13:15.570458   40358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:13:15.570496   40358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 19:13:15.570508   40358 cache.go:56] Caching tarball of preloaded images
	I0924 19:13:15.570581   40358 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:13:15.570611   40358 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 19:13:15.570728   40358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/config.json ...
	I0924 19:13:15.570972   40358 start.go:360] acquireMachinesLock for multinode-624105: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:13:15.571026   40358 start.go:364] duration metric: took 35.55µs to acquireMachinesLock for "multinode-624105"
	I0924 19:13:15.571045   40358 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:13:15.571054   40358 fix.go:54] fixHost starting: 
	I0924 19:13:15.571365   40358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:13:15.571408   40358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:13:15.585794   40358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0924 19:13:15.586168   40358 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:13:15.586583   40358 main.go:141] libmachine: Using API Version  1
	I0924 19:13:15.586601   40358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:13:15.587009   40358 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:13:15.587199   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.587328   40358 main.go:141] libmachine: (multinode-624105) Calling .GetState
	I0924 19:13:15.588651   40358 fix.go:112] recreateIfNeeded on multinode-624105: state=Running err=<nil>
	W0924 19:13:15.588669   40358 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:13:15.590420   40358 out.go:177] * Updating the running kvm2 "multinode-624105" VM ...
	I0924 19:13:15.591556   40358 machine.go:93] provisionDockerMachine start ...
	I0924 19:13:15.591572   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:13:15.591734   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.593945   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.594314   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.594332   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.594500   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.594669   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.594798   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.594926   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.595076   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.595222   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.595232   40358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:13:15.698975   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-624105
	
	I0924 19:13:15.699006   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.699279   40358 buildroot.go:166] provisioning hostname "multinode-624105"
	I0924 19:13:15.699304   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.699491   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.702294   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.702849   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.702892   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.702991   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.703191   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.703331   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.703461   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.703660   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.703872   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.703888   40358 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-624105 && echo "multinode-624105" | sudo tee /etc/hostname
	I0924 19:13:15.819337   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-624105
	
	I0924 19:13:15.819376   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.822034   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.822396   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.822422   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.822614   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:15.822799   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.822944   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:15.823059   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:15.823211   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:15.823373   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:15.823389   40358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-624105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-624105/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-624105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:13:15.927243   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:13:15.927284   40358 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:13:15.927309   40358 buildroot.go:174] setting up certificates
	I0924 19:13:15.927321   40358 provision.go:84] configureAuth start
	I0924 19:13:15.927332   40358 main.go:141] libmachine: (multinode-624105) Calling .GetMachineName
	I0924 19:13:15.927588   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:13:15.930204   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.930728   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.930758   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.930945   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:15.933185   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.933519   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:15.933550   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:15.933709   40358 provision.go:143] copyHostCerts
	I0924 19:13:15.933737   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:13:15.933764   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:13:15.933773   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:13:15.933841   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:13:15.933916   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:13:15.933932   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:13:15.933938   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:13:15.933961   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:13:15.934001   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:13:15.934017   40358 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:13:15.934023   40358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:13:15.934043   40358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:13:15.934089   40358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.multinode-624105 san=[127.0.0.1 192.168.39.206 localhost minikube multinode-624105]
	I0924 19:13:16.010522   40358 provision.go:177] copyRemoteCerts
	I0924 19:13:16.010579   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:13:16.010600   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:16.013131   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.013468   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:16.013500   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.013642   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:16.013805   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.013957   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:16.014110   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:13:16.096120   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 19:13:16.096195   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:13:16.118408   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 19:13:16.118459   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0924 19:13:16.141550   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 19:13:16.141610   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:13:16.163790   40358 provision.go:87] duration metric: took 236.45809ms to configureAuth
	I0924 19:13:16.163813   40358 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:13:16.164008   40358 config.go:182] Loaded profile config "multinode-624105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:13:16.164076   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:13:16.166523   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.167118   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:13:16.167150   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:13:16.167310   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:13:16.167492   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.167645   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:13:16.167807   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:13:16.167947   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:13:16.168133   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:13:16.168149   40358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:14:46.869475   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:14:46.869526   40358 machine.go:96] duration metric: took 1m31.277957272s to provisionDockerMachine
	I0924 19:14:46.869542   40358 start.go:293] postStartSetup for "multinode-624105" (driver="kvm2")
	I0924 19:14:46.869565   40358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:14:46.869611   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:46.869942   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:14:46.869977   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:46.873216   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.873638   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:46.873664   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.873805   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:46.873995   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.874159   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:46.874276   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:46.956705   40358 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:14:46.960795   40358 command_runner.go:130] > NAME=Buildroot
	I0924 19:14:46.960810   40358 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0924 19:14:46.960815   40358 command_runner.go:130] > ID=buildroot
	I0924 19:14:46.960819   40358 command_runner.go:130] > VERSION_ID=2023.02.9
	I0924 19:14:46.960824   40358 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0924 19:14:46.960853   40358 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:14:46.960870   40358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:14:46.960936   40358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:14:46.961006   40358 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:14:46.961017   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /etc/ssl/certs/109492.pem
	I0924 19:14:46.961095   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:14:46.969889   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:14:46.992119   40358 start.go:296] duration metric: took 122.564038ms for postStartSetup
	I0924 19:14:46.992165   40358 fix.go:56] duration metric: took 1m31.421112791s for fixHost
	I0924 19:14:46.992196   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:46.995170   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.995557   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:46.995584   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:46.995743   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:46.995912   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.996058   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:46.996180   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:46.996403   40358 main.go:141] libmachine: Using SSH client type: native
	I0924 19:14:46.996614   40358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0924 19:14:46.996627   40358 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:14:47.099135   40358 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727205287.080707152
	
	I0924 19:14:47.099156   40358 fix.go:216] guest clock: 1727205287.080707152
	I0924 19:14:47.099164   40358 fix.go:229] Guest: 2024-09-24 19:14:47.080707152 +0000 UTC Remote: 2024-09-24 19:14:46.992174081 +0000 UTC m=+91.543141311 (delta=88.533071ms)
	I0924 19:14:47.099194   40358 fix.go:200] guest clock delta is within tolerance: 88.533071ms
	I0924 19:14:47.099200   40358 start.go:83] releasing machines lock for "multinode-624105", held for 1m31.528163017s
	I0924 19:14:47.099223   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.099454   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:14:47.102316   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.102729   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.102759   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.102931   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103397   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103546   40358 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:14:47.103643   40358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:14:47.103687   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:47.103741   40358 ssh_runner.go:195] Run: cat /version.json
	I0924 19:14:47.103761   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:14:47.106181   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106522   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.106549   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106598   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.106721   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:47.106891   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:47.107034   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:47.107082   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:47.107102   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:47.107201   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:47.107263   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:14:47.107398   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:14:47.107519   40358 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:14:47.107651   40358 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:14:47.205201   40358 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0924 19:14:47.205239   40358 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0924 19:14:47.205392   40358 ssh_runner.go:195] Run: systemctl --version
	I0924 19:14:47.210822   40358 command_runner.go:130] > systemd 252 (252)
	I0924 19:14:47.210877   40358 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0924 19:14:47.210966   40358 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:14:47.358961   40358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 19:14:47.372489   40358 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0924 19:14:47.372553   40358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:14:47.372607   40358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:14:47.383102   40358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 19:14:47.383128   40358 start.go:495] detecting cgroup driver to use...
	I0924 19:14:47.383200   40358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:14:47.401448   40358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:14:47.416827   40358 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:14:47.416890   40358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:14:47.432271   40358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:14:47.447001   40358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:14:47.594228   40358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:14:47.736944   40358 docker.go:233] disabling docker service ...
	I0924 19:14:47.737008   40358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:14:47.753225   40358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:14:47.766218   40358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:14:47.898567   40358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:14:48.034620   40358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:14:48.049035   40358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:14:48.065477   40358 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0924 19:14:48.065531   40358 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:14:48.065591   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.075363   40358 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:14:48.075419   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.091883   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.113599   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.133509   40358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:14:48.143720   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.153373   40358 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.163606   40358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:14:48.173262   40358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:14:48.181937   40358 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0924 19:14:48.182015   40358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:14:48.190769   40358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:14:48.333272   40358 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:14:48.519904   40358 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:14:48.519961   40358 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:14:48.524364   40358 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0924 19:14:48.524388   40358 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0924 19:14:48.524397   40358 command_runner.go:130] > Device: 0,22	Inode: 1301        Links: 1
	I0924 19:14:48.524408   40358 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 19:14:48.524417   40358 command_runner.go:130] > Access: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524428   40358 command_runner.go:130] > Modify: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524435   40358 command_runner.go:130] > Change: 2024-09-24 19:14:48.400170230 +0000
	I0924 19:14:48.524439   40358 command_runner.go:130] >  Birth: -
	I0924 19:14:48.524543   40358 start.go:563] Will wait 60s for crictl version
	I0924 19:14:48.524603   40358 ssh_runner.go:195] Run: which crictl
	I0924 19:14:48.527973   40358 command_runner.go:130] > /usr/bin/crictl
	I0924 19:14:48.528027   40358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:14:48.560409   40358 command_runner.go:130] > Version:  0.1.0
	I0924 19:14:48.560430   40358 command_runner.go:130] > RuntimeName:  cri-o
	I0924 19:14:48.560435   40358 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0924 19:14:48.560441   40358 command_runner.go:130] > RuntimeApiVersion:  v1
	I0924 19:14:48.561489   40358 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:14:48.561574   40358 ssh_runner.go:195] Run: crio --version
	I0924 19:14:48.590182   40358 command_runner.go:130] > crio version 1.29.1
	I0924 19:14:48.590203   40358 command_runner.go:130] > Version:        1.29.1
	I0924 19:14:48.590209   40358 command_runner.go:130] > GitCommit:      unknown
	I0924 19:14:48.590214   40358 command_runner.go:130] > GitCommitDate:  unknown
	I0924 19:14:48.590218   40358 command_runner.go:130] > GitTreeState:   clean
	I0924 19:14:48.590223   40358 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 19:14:48.590228   40358 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 19:14:48.590232   40358 command_runner.go:130] > Compiler:       gc
	I0924 19:14:48.590235   40358 command_runner.go:130] > Platform:       linux/amd64
	I0924 19:14:48.590266   40358 command_runner.go:130] > Linkmode:       dynamic
	I0924 19:14:48.590273   40358 command_runner.go:130] > BuildTags:      
	I0924 19:14:48.590278   40358 command_runner.go:130] >   containers_image_ostree_stub
	I0924 19:14:48.590281   40358 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 19:14:48.590285   40358 command_runner.go:130] >   btrfs_noversion
	I0924 19:14:48.590292   40358 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 19:14:48.590296   40358 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 19:14:48.590300   40358 command_runner.go:130] >   seccomp
	I0924 19:14:48.590303   40358 command_runner.go:130] > LDFlags:          unknown
	I0924 19:14:48.590310   40358 command_runner.go:130] > SeccompEnabled:   true
	I0924 19:14:48.590313   40358 command_runner.go:130] > AppArmorEnabled:  false
	I0924 19:14:48.591468   40358 ssh_runner.go:195] Run: crio --version
	I0924 19:14:48.621554   40358 command_runner.go:130] > crio version 1.29.1
	I0924 19:14:48.621580   40358 command_runner.go:130] > Version:        1.29.1
	I0924 19:14:48.621586   40358 command_runner.go:130] > GitCommit:      unknown
	I0924 19:14:48.621590   40358 command_runner.go:130] > GitCommitDate:  unknown
	I0924 19:14:48.621595   40358 command_runner.go:130] > GitTreeState:   clean
	I0924 19:14:48.621603   40358 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 19:14:48.621611   40358 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 19:14:48.621617   40358 command_runner.go:130] > Compiler:       gc
	I0924 19:14:48.621626   40358 command_runner.go:130] > Platform:       linux/amd64
	I0924 19:14:48.621638   40358 command_runner.go:130] > Linkmode:       dynamic
	I0924 19:14:48.621645   40358 command_runner.go:130] > BuildTags:      
	I0924 19:14:48.621650   40358 command_runner.go:130] >   containers_image_ostree_stub
	I0924 19:14:48.621655   40358 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 19:14:48.621659   40358 command_runner.go:130] >   btrfs_noversion
	I0924 19:14:48.621665   40358 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 19:14:48.621669   40358 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 19:14:48.621674   40358 command_runner.go:130] >   seccomp
	I0924 19:14:48.621677   40358 command_runner.go:130] > LDFlags:          unknown
	I0924 19:14:48.621682   40358 command_runner.go:130] > SeccompEnabled:   true
	I0924 19:14:48.621686   40358 command_runner.go:130] > AppArmorEnabled:  false
	I0924 19:14:48.624655   40358 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:14:48.625868   40358 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:14:48.628401   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:48.628889   40358 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:14:48.628911   40358 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:14:48.629276   40358 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:14:48.633069   40358 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0924 19:14:48.633251   40358 kubeadm.go:883] updating cluster {Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:14:48.633386   40358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:14:48.633429   40358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:14:48.669706   40358 command_runner.go:130] > {
	I0924 19:14:48.669726   40358 command_runner.go:130] >   "images": [
	I0924 19:14:48.669730   40358 command_runner.go:130] >     {
	I0924 19:14:48.669738   40358 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 19:14:48.669742   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669748   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 19:14:48.669752   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669756   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.669766   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 19:14:48.669773   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 19:14:48.669777   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669781   40358 command_runner.go:130] >       "size": "87190579",
	I0924 19:14:48.669788   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.669792   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.669799   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.669805   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.669810   40358 command_runner.go:130] >     },
	I0924 19:14:48.669815   40358 command_runner.go:130] >     {
	I0924 19:14:48.669820   40358 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 19:14:48.669828   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669838   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 19:14:48.669847   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669852   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.669867   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 19:14:48.669882   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 19:14:48.669888   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669893   40358 command_runner.go:130] >       "size": "1363676",
	I0924 19:14:48.669899   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.669916   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.669926   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.669932   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.669942   40358 command_runner.go:130] >     },
	I0924 19:14:48.669947   40358 command_runner.go:130] >     {
	I0924 19:14:48.669960   40358 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 19:14:48.669969   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.669981   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 19:14:48.669989   40358 command_runner.go:130] >       ],
	I0924 19:14:48.669996   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670010   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 19:14:48.670021   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 19:14:48.670027   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670032   40358 command_runner.go:130] >       "size": "31470524",
	I0924 19:14:48.670040   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670049   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670056   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670068   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670077   40358 command_runner.go:130] >     },
	I0924 19:14:48.670085   40358 command_runner.go:130] >     {
	I0924 19:14:48.670096   40358 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 19:14:48.670105   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670116   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 19:14:48.670124   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670128   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670143   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 19:14:48.670164   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 19:14:48.670173   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670184   40358 command_runner.go:130] >       "size": "63273227",
	I0924 19:14:48.670193   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670202   40358 command_runner.go:130] >       "username": "nonroot",
	I0924 19:14:48.670212   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670220   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670226   40358 command_runner.go:130] >     },
	I0924 19:14:48.670230   40358 command_runner.go:130] >     {
	I0924 19:14:48.670242   40358 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 19:14:48.670251   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670262   40358 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 19:14:48.670270   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670282   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670295   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 19:14:48.670309   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 19:14:48.670318   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670323   40358 command_runner.go:130] >       "size": "149009664",
	I0924 19:14:48.670331   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670338   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670347   40358 command_runner.go:130] >       },
	I0924 19:14:48.670356   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670366   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670376   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670384   40358 command_runner.go:130] >     },
	I0924 19:14:48.670393   40358 command_runner.go:130] >     {
	I0924 19:14:48.670406   40358 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 19:14:48.670414   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670423   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 19:14:48.670428   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670437   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670451   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 19:14:48.670468   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 19:14:48.670476   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670486   40358 command_runner.go:130] >       "size": "95237600",
	I0924 19:14:48.670495   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670505   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670512   40358 command_runner.go:130] >       },
	I0924 19:14:48.670519   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670524   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670533   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670541   40358 command_runner.go:130] >     },
	I0924 19:14:48.670547   40358 command_runner.go:130] >     {
	I0924 19:14:48.670566   40358 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 19:14:48.670575   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670584   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 19:14:48.670593   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670600   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670613   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 19:14:48.670625   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 19:14:48.670634   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670643   40358 command_runner.go:130] >       "size": "89437508",
	I0924 19:14:48.670648   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670656   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670663   40358 command_runner.go:130] >       },
	I0924 19:14:48.670672   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670678   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670687   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670693   40358 command_runner.go:130] >     },
	I0924 19:14:48.670701   40358 command_runner.go:130] >     {
	I0924 19:14:48.670711   40358 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 19:14:48.670719   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670724   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 19:14:48.670728   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670733   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670749   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 19:14:48.670758   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 19:14:48.670762   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670766   40358 command_runner.go:130] >       "size": "92733849",
	I0924 19:14:48.670772   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.670780   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670786   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670800   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670806   40358 command_runner.go:130] >     },
	I0924 19:14:48.670811   40358 command_runner.go:130] >     {
	I0924 19:14:48.670820   40358 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 19:14:48.670837   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670845   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 19:14:48.670850   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670854   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670861   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 19:14:48.670873   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 19:14:48.670877   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670882   40358 command_runner.go:130] >       "size": "68420934",
	I0924 19:14:48.670885   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670889   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.670892   40358 command_runner.go:130] >       },
	I0924 19:14:48.670896   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670899   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670903   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.670907   40358 command_runner.go:130] >     },
	I0924 19:14:48.670909   40358 command_runner.go:130] >     {
	I0924 19:14:48.670915   40358 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 19:14:48.670919   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.670923   40358 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 19:14:48.670927   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670930   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.670938   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 19:14:48.670945   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 19:14:48.670949   40358 command_runner.go:130] >       ],
	I0924 19:14:48.670955   40358 command_runner.go:130] >       "size": "742080",
	I0924 19:14:48.670958   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.670962   40358 command_runner.go:130] >         "value": "65535"
	I0924 19:14:48.670968   40358 command_runner.go:130] >       },
	I0924 19:14:48.670972   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.670978   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.670982   40358 command_runner.go:130] >       "pinned": true
	I0924 19:14:48.670987   40358 command_runner.go:130] >     }
	I0924 19:14:48.670990   40358 command_runner.go:130] >   ]
	I0924 19:14:48.670995   40358 command_runner.go:130] > }
	I0924 19:14:48.671176   40358 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:14:48.671187   40358 crio.go:433] Images already preloaded, skipping extraction
	I0924 19:14:48.671235   40358 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:14:48.705211   40358 command_runner.go:130] > {
	I0924 19:14:48.705232   40358 command_runner.go:130] >   "images": [
	I0924 19:14:48.705253   40358 command_runner.go:130] >     {
	I0924 19:14:48.705262   40358 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 19:14:48.705266   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705272   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 19:14:48.705275   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705281   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705293   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 19:14:48.705307   40358 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 19:14:48.705315   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705323   40358 command_runner.go:130] >       "size": "87190579",
	I0924 19:14:48.705333   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705339   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705348   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705354   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705359   40358 command_runner.go:130] >     },
	I0924 19:14:48.705363   40358 command_runner.go:130] >     {
	I0924 19:14:48.705369   40358 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 19:14:48.705376   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705385   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 19:14:48.705393   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705400   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705415   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 19:14:48.705429   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 19:14:48.705438   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705446   40358 command_runner.go:130] >       "size": "1363676",
	I0924 19:14:48.705453   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705460   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705466   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705472   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705480   40358 command_runner.go:130] >     },
	I0924 19:14:48.705489   40358 command_runner.go:130] >     {
	I0924 19:14:48.705502   40358 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 19:14:48.705512   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705523   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 19:14:48.705531   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705539   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705547   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 19:14:48.705560   40358 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 19:14:48.705569   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705581   40358 command_runner.go:130] >       "size": "31470524",
	I0924 19:14:48.705591   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705607   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705618   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705628   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705635   40358 command_runner.go:130] >     },
	I0924 19:14:48.705639   40358 command_runner.go:130] >     {
	I0924 19:14:48.705650   40358 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 19:14:48.705660   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705669   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 19:14:48.705678   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705687   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705701   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 19:14:48.705719   40358 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 19:14:48.705727   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705736   40358 command_runner.go:130] >       "size": "63273227",
	I0924 19:14:48.705745   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.705755   40358 command_runner.go:130] >       "username": "nonroot",
	I0924 19:14:48.705768   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705777   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705786   40358 command_runner.go:130] >     },
	I0924 19:14:48.705795   40358 command_runner.go:130] >     {
	I0924 19:14:48.705808   40358 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 19:14:48.705817   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705826   40358 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 19:14:48.705832   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705838   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.705851   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 19:14:48.705865   40358 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 19:14:48.705873   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705883   40358 command_runner.go:130] >       "size": "149009664",
	I0924 19:14:48.705892   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.705901   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.705909   40358 command_runner.go:130] >       },
	I0924 19:14:48.705918   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.705927   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.705935   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.705938   40358 command_runner.go:130] >     },
	I0924 19:14:48.705946   40358 command_runner.go:130] >     {
	I0924 19:14:48.705956   40358 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 19:14:48.705965   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.705974   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 19:14:48.705984   40358 command_runner.go:130] >       ],
	I0924 19:14:48.705993   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706006   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 19:14:48.706021   40358 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 19:14:48.706029   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706035   40358 command_runner.go:130] >       "size": "95237600",
	I0924 19:14:48.706042   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706047   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706055   40358 command_runner.go:130] >       },
	I0924 19:14:48.706064   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706072   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706082   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706090   40358 command_runner.go:130] >     },
	I0924 19:14:48.706097   40358 command_runner.go:130] >     {
	I0924 19:14:48.706109   40358 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 19:14:48.706118   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706129   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 19:14:48.706136   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706140   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706154   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 19:14:48.706169   40358 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 19:14:48.706181   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706191   40358 command_runner.go:130] >       "size": "89437508",
	I0924 19:14:48.706200   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706209   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706218   40358 command_runner.go:130] >       },
	I0924 19:14:48.706225   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706233   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706238   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706244   40358 command_runner.go:130] >     },
	I0924 19:14:48.706250   40358 command_runner.go:130] >     {
	I0924 19:14:48.706263   40358 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 19:14:48.706273   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706284   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 19:14:48.706293   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706301   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706322   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 19:14:48.706334   40358 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 19:14:48.706339   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706349   40358 command_runner.go:130] >       "size": "92733849",
	I0924 19:14:48.706358   40358 command_runner.go:130] >       "uid": null,
	I0924 19:14:48.706367   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706375   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706383   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706390   40358 command_runner.go:130] >     },
	I0924 19:14:48.706399   40358 command_runner.go:130] >     {
	I0924 19:14:48.706408   40358 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 19:14:48.706417   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706427   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 19:14:48.706436   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706443   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706461   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 19:14:48.706475   40358 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 19:14:48.706483   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706491   40358 command_runner.go:130] >       "size": "68420934",
	I0924 19:14:48.706500   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706507   40358 command_runner.go:130] >         "value": "0"
	I0924 19:14:48.706515   40358 command_runner.go:130] >       },
	I0924 19:14:48.706522   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706530   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706536   40358 command_runner.go:130] >       "pinned": false
	I0924 19:14:48.706541   40358 command_runner.go:130] >     },
	I0924 19:14:48.706548   40358 command_runner.go:130] >     {
	I0924 19:14:48.706558   40358 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 19:14:48.706567   40358 command_runner.go:130] >       "repoTags": [
	I0924 19:14:48.706576   40358 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 19:14:48.706585   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706591   40358 command_runner.go:130] >       "repoDigests": [
	I0924 19:14:48.706615   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 19:14:48.706632   40358 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 19:14:48.706642   40358 command_runner.go:130] >       ],
	I0924 19:14:48.706652   40358 command_runner.go:130] >       "size": "742080",
	I0924 19:14:48.706661   40358 command_runner.go:130] >       "uid": {
	I0924 19:14:48.706671   40358 command_runner.go:130] >         "value": "65535"
	I0924 19:14:48.706679   40358 command_runner.go:130] >       },
	I0924 19:14:48.706685   40358 command_runner.go:130] >       "username": "",
	I0924 19:14:48.706694   40358 command_runner.go:130] >       "spec": null,
	I0924 19:14:48.706703   40358 command_runner.go:130] >       "pinned": true
	I0924 19:14:48.706708   40358 command_runner.go:130] >     }
	I0924 19:14:48.706711   40358 command_runner.go:130] >   ]
	I0924 19:14:48.706714   40358 command_runner.go:130] > }
	I0924 19:14:48.706911   40358 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:14:48.706926   40358 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:14:48.706942   40358 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.1 crio true true} ...
	I0924 19:14:48.707069   40358 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-624105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:14:48.707146   40358 ssh_runner.go:195] Run: crio config
	I0924 19:14:48.736190   40358 command_runner.go:130] ! time="2024-09-24 19:14:48.717719834Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0924 19:14:48.741422   40358 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0924 19:14:48.752004   40358 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0924 19:14:48.752022   40358 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0924 19:14:48.752028   40358 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0924 19:14:48.752032   40358 command_runner.go:130] > #
	I0924 19:14:48.752042   40358 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0924 19:14:48.752049   40358 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0924 19:14:48.752057   40358 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0924 19:14:48.752066   40358 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0924 19:14:48.752073   40358 command_runner.go:130] > # reload'.
	I0924 19:14:48.752084   40358 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0924 19:14:48.752095   40358 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0924 19:14:48.752107   40358 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0924 19:14:48.752120   40358 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0924 19:14:48.752142   40358 command_runner.go:130] > [crio]
	I0924 19:14:48.752155   40358 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0924 19:14:48.752161   40358 command_runner.go:130] > # containers images, in this directory.
	I0924 19:14:48.752165   40358 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0924 19:14:48.752175   40358 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0924 19:14:48.752182   40358 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0924 19:14:48.752190   40358 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0924 19:14:48.752196   40358 command_runner.go:130] > # imagestore = ""
	I0924 19:14:48.752202   40358 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0924 19:14:48.752210   40358 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0924 19:14:48.752214   40358 command_runner.go:130] > storage_driver = "overlay"
	I0924 19:14:48.752221   40358 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0924 19:14:48.752231   40358 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0924 19:14:48.752237   40358 command_runner.go:130] > storage_option = [
	I0924 19:14:48.752241   40358 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0924 19:14:48.752246   40358 command_runner.go:130] > ]
	I0924 19:14:48.752253   40358 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0924 19:14:48.752261   40358 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0924 19:14:48.752268   40358 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0924 19:14:48.752273   40358 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0924 19:14:48.752281   40358 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0924 19:14:48.752286   40358 command_runner.go:130] > # always happen on a node reboot
	I0924 19:14:48.752291   40358 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0924 19:14:48.752302   40358 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0924 19:14:48.752309   40358 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0924 19:14:48.752317   40358 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0924 19:14:48.752322   40358 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0924 19:14:48.752331   40358 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0924 19:14:48.752339   40358 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0924 19:14:48.752345   40358 command_runner.go:130] > # internal_wipe = true
	I0924 19:14:48.752354   40358 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0924 19:14:48.752361   40358 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0924 19:14:48.752365   40358 command_runner.go:130] > # internal_repair = false
	I0924 19:14:48.752370   40358 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0924 19:14:48.752377   40358 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0924 19:14:48.752384   40358 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0924 19:14:48.752389   40358 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0924 19:14:48.752400   40358 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0924 19:14:48.752405   40358 command_runner.go:130] > [crio.api]
	I0924 19:14:48.752411   40358 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0924 19:14:48.752418   40358 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0924 19:14:48.752423   40358 command_runner.go:130] > # IP address on which the stream server will listen.
	I0924 19:14:48.752429   40358 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0924 19:14:48.752436   40358 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0924 19:14:48.752443   40358 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0924 19:14:48.752447   40358 command_runner.go:130] > # stream_port = "0"
	I0924 19:14:48.752455   40358 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0924 19:14:48.752464   40358 command_runner.go:130] > # stream_enable_tls = false
	I0924 19:14:48.752478   40358 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0924 19:14:48.752488   40358 command_runner.go:130] > # stream_idle_timeout = ""
	I0924 19:14:48.752500   40358 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0924 19:14:48.752512   40358 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0924 19:14:48.752521   40358 command_runner.go:130] > # minutes.
	I0924 19:14:48.752528   40358 command_runner.go:130] > # stream_tls_cert = ""
	I0924 19:14:48.752534   40358 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0924 19:14:48.752544   40358 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0924 19:14:48.752550   40358 command_runner.go:130] > # stream_tls_key = ""
	I0924 19:14:48.752556   40358 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0924 19:14:48.752563   40358 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0924 19:14:48.752577   40358 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0924 19:14:48.752583   40358 command_runner.go:130] > # stream_tls_ca = ""
	I0924 19:14:48.752590   40358 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 19:14:48.752597   40358 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0924 19:14:48.752605   40358 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 19:14:48.752611   40358 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0924 19:14:48.752617   40358 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0924 19:14:48.752624   40358 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0924 19:14:48.752628   40358 command_runner.go:130] > [crio.runtime]
	I0924 19:14:48.752637   40358 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0924 19:14:48.752645   40358 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0924 19:14:48.752650   40358 command_runner.go:130] > # "nofile=1024:2048"
	I0924 19:14:48.752656   40358 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0924 19:14:48.752662   40358 command_runner.go:130] > # default_ulimits = [
	I0924 19:14:48.752665   40358 command_runner.go:130] > # ]
	I0924 19:14:48.752674   40358 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0924 19:14:48.752678   40358 command_runner.go:130] > # no_pivot = false
	I0924 19:14:48.752687   40358 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0924 19:14:48.752695   40358 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0924 19:14:48.752702   40358 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0924 19:14:48.752707   40358 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0924 19:14:48.752714   40358 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0924 19:14:48.752720   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 19:14:48.752726   40358 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0924 19:14:48.752731   40358 command_runner.go:130] > # Cgroup setting for conmon
	I0924 19:14:48.752739   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0924 19:14:48.752746   40358 command_runner.go:130] > conmon_cgroup = "pod"
	I0924 19:14:48.752751   40358 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0924 19:14:48.752758   40358 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0924 19:14:48.752764   40358 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 19:14:48.752770   40358 command_runner.go:130] > conmon_env = [
	I0924 19:14:48.752776   40358 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 19:14:48.752781   40358 command_runner.go:130] > ]
	I0924 19:14:48.752786   40358 command_runner.go:130] > # Additional environment variables to set for all the
	I0924 19:14:48.752793   40358 command_runner.go:130] > # containers. These are overridden if set in the
	I0924 19:14:48.752799   40358 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0924 19:14:48.752804   40358 command_runner.go:130] > # default_env = [
	I0924 19:14:48.752811   40358 command_runner.go:130] > # ]
	I0924 19:14:48.752819   40358 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0924 19:14:48.752828   40358 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0924 19:14:48.752834   40358 command_runner.go:130] > # selinux = false
	I0924 19:14:48.752840   40358 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0924 19:14:48.752848   40358 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0924 19:14:48.752853   40358 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0924 19:14:48.752859   40358 command_runner.go:130] > # seccomp_profile = ""
	I0924 19:14:48.752865   40358 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0924 19:14:48.752872   40358 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0924 19:14:48.752885   40358 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0924 19:14:48.752891   40358 command_runner.go:130] > # which might increase security.
	I0924 19:14:48.752896   40358 command_runner.go:130] > # This option is currently deprecated,
	I0924 19:14:48.752903   40358 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0924 19:14:48.752910   40358 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0924 19:14:48.752916   40358 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0924 19:14:48.752924   40358 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0924 19:14:48.752934   40358 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0924 19:14:48.752943   40358 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0924 19:14:48.752949   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.752954   40358 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0924 19:14:48.752961   40358 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0924 19:14:48.752966   40358 command_runner.go:130] > # the cgroup blockio controller.
	I0924 19:14:48.752972   40358 command_runner.go:130] > # blockio_config_file = ""
	I0924 19:14:48.752979   40358 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0924 19:14:48.752985   40358 command_runner.go:130] > # blockio parameters.
	I0924 19:14:48.752989   40358 command_runner.go:130] > # blockio_reload = false
	I0924 19:14:48.752997   40358 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0924 19:14:48.753002   40358 command_runner.go:130] > # irqbalance daemon.
	I0924 19:14:48.753007   40358 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0924 19:14:48.753015   40358 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0924 19:14:48.753021   40358 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0924 19:14:48.753030   40358 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0924 19:14:48.753038   40358 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0924 19:14:48.753046   40358 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0924 19:14:48.753054   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.753057   40358 command_runner.go:130] > # rdt_config_file = ""
	I0924 19:14:48.753062   40358 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0924 19:14:48.753068   40358 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0924 19:14:48.753083   40358 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0924 19:14:48.753089   40358 command_runner.go:130] > # separate_pull_cgroup = ""
	I0924 19:14:48.753095   40358 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0924 19:14:48.753103   40358 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0924 19:14:48.753107   40358 command_runner.go:130] > # will be added.
	I0924 19:14:48.753113   40358 command_runner.go:130] > # default_capabilities = [
	I0924 19:14:48.753117   40358 command_runner.go:130] > # 	"CHOWN",
	I0924 19:14:48.753123   40358 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0924 19:14:48.753127   40358 command_runner.go:130] > # 	"FSETID",
	I0924 19:14:48.753132   40358 command_runner.go:130] > # 	"FOWNER",
	I0924 19:14:48.753136   40358 command_runner.go:130] > # 	"SETGID",
	I0924 19:14:48.753141   40358 command_runner.go:130] > # 	"SETUID",
	I0924 19:14:48.753145   40358 command_runner.go:130] > # 	"SETPCAP",
	I0924 19:14:48.753151   40358 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0924 19:14:48.753155   40358 command_runner.go:130] > # 	"KILL",
	I0924 19:14:48.753160   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753167   40358 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0924 19:14:48.753176   40358 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0924 19:14:48.753185   40358 command_runner.go:130] > # add_inheritable_capabilities = false
	I0924 19:14:48.753193   40358 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0924 19:14:48.753201   40358 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 19:14:48.753205   40358 command_runner.go:130] > default_sysctls = [
	I0924 19:14:48.753210   40358 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0924 19:14:48.753215   40358 command_runner.go:130] > ]
	I0924 19:14:48.753220   40358 command_runner.go:130] > # List of devices on the host that a
	I0924 19:14:48.753228   40358 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0924 19:14:48.753232   40358 command_runner.go:130] > # allowed_devices = [
	I0924 19:14:48.753238   40358 command_runner.go:130] > # 	"/dev/fuse",
	I0924 19:14:48.753241   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753248   40358 command_runner.go:130] > # List of additional devices. specified as
	I0924 19:14:48.753255   40358 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0924 19:14:48.753262   40358 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0924 19:14:48.753268   40358 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 19:14:48.753274   40358 command_runner.go:130] > # additional_devices = [
	I0924 19:14:48.753277   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753284   40358 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0924 19:14:48.753288   40358 command_runner.go:130] > # cdi_spec_dirs = [
	I0924 19:14:48.753294   40358 command_runner.go:130] > # 	"/etc/cdi",
	I0924 19:14:48.753298   40358 command_runner.go:130] > # 	"/var/run/cdi",
	I0924 19:14:48.753303   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753309   40358 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0924 19:14:48.753317   40358 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0924 19:14:48.753323   40358 command_runner.go:130] > # Defaults to false.
	I0924 19:14:48.753328   40358 command_runner.go:130] > # device_ownership_from_security_context = false
	I0924 19:14:48.753336   40358 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0924 19:14:48.753344   40358 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0924 19:14:48.753348   40358 command_runner.go:130] > # hooks_dir = [
	I0924 19:14:48.753353   40358 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0924 19:14:48.753356   40358 command_runner.go:130] > # ]
	I0924 19:14:48.753362   40358 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0924 19:14:48.753370   40358 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0924 19:14:48.753377   40358 command_runner.go:130] > # its default mounts from the following two files:
	I0924 19:14:48.753381   40358 command_runner.go:130] > #
	I0924 19:14:48.753387   40358 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0924 19:14:48.753395   40358 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0924 19:14:48.753403   40358 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0924 19:14:48.753408   40358 command_runner.go:130] > #
	I0924 19:14:48.753414   40358 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0924 19:14:48.753422   40358 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0924 19:14:48.753428   40358 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0924 19:14:48.753437   40358 command_runner.go:130] > #      only add mounts it finds in this file.
	I0924 19:14:48.753443   40358 command_runner.go:130] > #
	I0924 19:14:48.753447   40358 command_runner.go:130] > # default_mounts_file = ""
	I0924 19:14:48.753454   40358 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0924 19:14:48.753463   40358 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0924 19:14:48.753472   40358 command_runner.go:130] > pids_limit = 1024
	I0924 19:14:48.753484   40358 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0924 19:14:48.753495   40358 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0924 19:14:48.753507   40358 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0924 19:14:48.753522   40358 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0924 19:14:48.753531   40358 command_runner.go:130] > # log_size_max = -1
	I0924 19:14:48.753542   40358 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0924 19:14:48.753549   40358 command_runner.go:130] > # log_to_journald = false
	I0924 19:14:48.753555   40358 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0924 19:14:48.753560   40358 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0924 19:14:48.753567   40358 command_runner.go:130] > # Path to directory for container attach sockets.
	I0924 19:14:48.753571   40358 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0924 19:14:48.753579   40358 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0924 19:14:48.753583   40358 command_runner.go:130] > # bind_mount_prefix = ""
	I0924 19:14:48.753588   40358 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0924 19:14:48.753594   40358 command_runner.go:130] > # read_only = false
	I0924 19:14:48.753600   40358 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0924 19:14:48.753608   40358 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0924 19:14:48.753614   40358 command_runner.go:130] > # live configuration reload.
	I0924 19:14:48.753619   40358 command_runner.go:130] > # log_level = "info"
	I0924 19:14:48.753626   40358 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0924 19:14:48.753633   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.753640   40358 command_runner.go:130] > # log_filter = ""
	I0924 19:14:48.753646   40358 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0924 19:14:48.753654   40358 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0924 19:14:48.753660   40358 command_runner.go:130] > # separated by comma.
	I0924 19:14:48.753667   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753673   40358 command_runner.go:130] > # uid_mappings = ""
	I0924 19:14:48.753680   40358 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0924 19:14:48.753688   40358 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0924 19:14:48.753694   40358 command_runner.go:130] > # separated by comma.
	I0924 19:14:48.753702   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753711   40358 command_runner.go:130] > # gid_mappings = ""
	I0924 19:14:48.753720   40358 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0924 19:14:48.753727   40358 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 19:14:48.753736   40358 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 19:14:48.753743   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753750   40358 command_runner.go:130] > # minimum_mappable_uid = -1
	I0924 19:14:48.753755   40358 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0924 19:14:48.753765   40358 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 19:14:48.753774   40358 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 19:14:48.753783   40358 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 19:14:48.753787   40358 command_runner.go:130] > # minimum_mappable_gid = -1
	I0924 19:14:48.753795   40358 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0924 19:14:48.753803   40358 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0924 19:14:48.753812   40358 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0924 19:14:48.753818   40358 command_runner.go:130] > # ctr_stop_timeout = 30
	I0924 19:14:48.753824   40358 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0924 19:14:48.753830   40358 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0924 19:14:48.753837   40358 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0924 19:14:48.753844   40358 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0924 19:14:48.753848   40358 command_runner.go:130] > drop_infra_ctr = false
	I0924 19:14:48.753856   40358 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0924 19:14:48.753863   40358 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0924 19:14:48.753870   40358 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0924 19:14:48.753876   40358 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0924 19:14:48.753887   40358 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0924 19:14:48.753894   40358 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0924 19:14:48.753902   40358 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0924 19:14:48.753907   40358 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0924 19:14:48.753913   40358 command_runner.go:130] > # shared_cpuset = ""
	I0924 19:14:48.753920   40358 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0924 19:14:48.753928   40358 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0924 19:14:48.753932   40358 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0924 19:14:48.753941   40358 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0924 19:14:48.753947   40358 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0924 19:14:48.753953   40358 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0924 19:14:48.753963   40358 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0924 19:14:48.753970   40358 command_runner.go:130] > # enable_criu_support = false
	I0924 19:14:48.753975   40358 command_runner.go:130] > # Enable/disable the generation of the container,
	I0924 19:14:48.753983   40358 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0924 19:14:48.753989   40358 command_runner.go:130] > # enable_pod_events = false
	I0924 19:14:48.753997   40358 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0924 19:14:48.754012   40358 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0924 19:14:48.754016   40358 command_runner.go:130] > # default_runtime = "runc"
	I0924 19:14:48.754022   40358 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0924 19:14:48.754031   40358 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0924 19:14:48.754042   40358 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0924 19:14:48.754049   40358 command_runner.go:130] > # creation as a file is not desired either.
	I0924 19:14:48.754057   40358 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0924 19:14:48.754064   40358 command_runner.go:130] > # the hostname is being managed dynamically.
	I0924 19:14:48.754068   40358 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0924 19:14:48.754074   40358 command_runner.go:130] > # ]
	I0924 19:14:48.754081   40358 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0924 19:14:48.754090   40358 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0924 19:14:48.754095   40358 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0924 19:14:48.754103   40358 command_runner.go:130] > # Each entry in the table should follow the format:
	I0924 19:14:48.754108   40358 command_runner.go:130] > #
	I0924 19:14:48.754113   40358 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0924 19:14:48.754120   40358 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0924 19:14:48.754139   40358 command_runner.go:130] > # runtime_type = "oci"
	I0924 19:14:48.754146   40358 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0924 19:14:48.754151   40358 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0924 19:14:48.754158   40358 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0924 19:14:48.754162   40358 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0924 19:14:48.754169   40358 command_runner.go:130] > # monitor_env = []
	I0924 19:14:48.754173   40358 command_runner.go:130] > # privileged_without_host_devices = false
	I0924 19:14:48.754180   40358 command_runner.go:130] > # allowed_annotations = []
	I0924 19:14:48.754185   40358 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0924 19:14:48.754190   40358 command_runner.go:130] > # Where:
	I0924 19:14:48.754196   40358 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0924 19:14:48.754203   40358 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0924 19:14:48.754211   40358 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0924 19:14:48.754217   40358 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0924 19:14:48.754225   40358 command_runner.go:130] > #   in $PATH.
	I0924 19:14:48.754232   40358 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0924 19:14:48.754237   40358 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0924 19:14:48.754245   40358 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0924 19:14:48.754249   40358 command_runner.go:130] > #   state.
	I0924 19:14:48.754255   40358 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0924 19:14:48.754262   40358 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0924 19:14:48.754268   40358 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0924 19:14:48.754276   40358 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0924 19:14:48.754281   40358 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0924 19:14:48.754289   40358 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0924 19:14:48.754295   40358 command_runner.go:130] > #   The currently recognized values are:
	I0924 19:14:48.754303   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0924 19:14:48.754312   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0924 19:14:48.754320   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0924 19:14:48.754328   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0924 19:14:48.754335   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0924 19:14:48.754343   40358 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0924 19:14:48.754352   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0924 19:14:48.754358   40358 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0924 19:14:48.754366   40358 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0924 19:14:48.754374   40358 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0924 19:14:48.754380   40358 command_runner.go:130] > #   deprecated option "conmon".
	I0924 19:14:48.754389   40358 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0924 19:14:48.754394   40358 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0924 19:14:48.754402   40358 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0924 19:14:48.754408   40358 command_runner.go:130] > #   should be moved to the container's cgroup
	I0924 19:14:48.754415   40358 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0924 19:14:48.754421   40358 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0924 19:14:48.754427   40358 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0924 19:14:48.754434   40358 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0924 19:14:48.754437   40358 command_runner.go:130] > #
	I0924 19:14:48.754442   40358 command_runner.go:130] > # Using the seccomp notifier feature:
	I0924 19:14:48.754449   40358 command_runner.go:130] > #
	I0924 19:14:48.754455   40358 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0924 19:14:48.754467   40358 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0924 19:14:48.754475   40358 command_runner.go:130] > #
	I0924 19:14:48.754483   40358 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0924 19:14:48.754495   40358 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0924 19:14:48.754502   40358 command_runner.go:130] > #
	I0924 19:14:48.754511   40358 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0924 19:14:48.754520   40358 command_runner.go:130] > # feature.
	I0924 19:14:48.754527   40358 command_runner.go:130] > #
	I0924 19:14:48.754533   40358 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0924 19:14:48.754540   40358 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0924 19:14:48.754548   40358 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0924 19:14:48.754555   40358 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0924 19:14:48.754561   40358 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0924 19:14:48.754564   40358 command_runner.go:130] > #
	I0924 19:14:48.754570   40358 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0924 19:14:48.754576   40358 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0924 19:14:48.754582   40358 command_runner.go:130] > #
	I0924 19:14:48.754587   40358 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0924 19:14:48.754594   40358 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0924 19:14:48.754597   40358 command_runner.go:130] > #
	I0924 19:14:48.754604   40358 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0924 19:14:48.754612   40358 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0924 19:14:48.754618   40358 command_runner.go:130] > # limitation.
	I0924 19:14:48.754623   40358 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0924 19:14:48.754631   40358 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0924 19:14:48.754637   40358 command_runner.go:130] > runtime_type = "oci"
	I0924 19:14:48.754642   40358 command_runner.go:130] > runtime_root = "/run/runc"
	I0924 19:14:48.754648   40358 command_runner.go:130] > runtime_config_path = ""
	I0924 19:14:48.754652   40358 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0924 19:14:48.754657   40358 command_runner.go:130] > monitor_cgroup = "pod"
	I0924 19:14:48.754663   40358 command_runner.go:130] > monitor_exec_cgroup = ""
	I0924 19:14:48.754667   40358 command_runner.go:130] > monitor_env = [
	I0924 19:14:48.754675   40358 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 19:14:48.754677   40358 command_runner.go:130] > ]
	I0924 19:14:48.754682   40358 command_runner.go:130] > privileged_without_host_devices = false
	I0924 19:14:48.754690   40358 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0924 19:14:48.754698   40358 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0924 19:14:48.754704   40358 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0924 19:14:48.754713   40358 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0924 19:14:48.754725   40358 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0924 19:14:48.754734   40358 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0924 19:14:48.754742   40358 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0924 19:14:48.754751   40358 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0924 19:14:48.754759   40358 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0924 19:14:48.754766   40358 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0924 19:14:48.754772   40358 command_runner.go:130] > # Example:
	I0924 19:14:48.754777   40358 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0924 19:14:48.754784   40358 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0924 19:14:48.754789   40358 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0924 19:14:48.754795   40358 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0924 19:14:48.754799   40358 command_runner.go:130] > # cpuset = 0
	I0924 19:14:48.754804   40358 command_runner.go:130] > # cpushares = "0-1"
	I0924 19:14:48.754807   40358 command_runner.go:130] > # Where:
	I0924 19:14:48.754814   40358 command_runner.go:130] > # The workload name is workload-type.
	I0924 19:14:48.754821   40358 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0924 19:14:48.754843   40358 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0924 19:14:48.754855   40358 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0924 19:14:48.754866   40358 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0924 19:14:48.754873   40358 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0924 19:14:48.754881   40358 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0924 19:14:48.754890   40358 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0924 19:14:48.754897   40358 command_runner.go:130] > # Default value is set to true
	I0924 19:14:48.754901   40358 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0924 19:14:48.754909   40358 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0924 19:14:48.754913   40358 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0924 19:14:48.754920   40358 command_runner.go:130] > # Default value is set to 'false'
	I0924 19:14:48.754925   40358 command_runner.go:130] > # disable_hostport_mapping = false
	I0924 19:14:48.754931   40358 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0924 19:14:48.754936   40358 command_runner.go:130] > #
	I0924 19:14:48.754942   40358 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0924 19:14:48.754947   40358 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0924 19:14:48.754953   40358 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0924 19:14:48.754958   40358 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0924 19:14:48.754966   40358 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0924 19:14:48.754970   40358 command_runner.go:130] > [crio.image]
	I0924 19:14:48.754975   40358 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0924 19:14:48.754979   40358 command_runner.go:130] > # default_transport = "docker://"
	I0924 19:14:48.754984   40358 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0924 19:14:48.754990   40358 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0924 19:14:48.754993   40358 command_runner.go:130] > # global_auth_file = ""
	I0924 19:14:48.754998   40358 command_runner.go:130] > # The image used to instantiate infra containers.
	I0924 19:14:48.755003   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.755007   40358 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0924 19:14:48.755013   40358 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0924 19:14:48.755018   40358 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0924 19:14:48.755023   40358 command_runner.go:130] > # This option supports live configuration reload.
	I0924 19:14:48.755028   40358 command_runner.go:130] > # pause_image_auth_file = ""
	I0924 19:14:48.755033   40358 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0924 19:14:48.755038   40358 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0924 19:14:48.755044   40358 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0924 19:14:48.755049   40358 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0924 19:14:48.755053   40358 command_runner.go:130] > # pause_command = "/pause"
	I0924 19:14:48.755058   40358 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0924 19:14:48.755063   40358 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0924 19:14:48.755069   40358 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0924 19:14:48.755075   40358 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0924 19:14:48.755080   40358 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0924 19:14:48.755086   40358 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0924 19:14:48.755089   40358 command_runner.go:130] > # pinned_images = [
	I0924 19:14:48.755092   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755098   40358 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0924 19:14:48.755103   40358 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0924 19:14:48.755109   40358 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0924 19:14:48.755114   40358 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0924 19:14:48.755119   40358 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0924 19:14:48.755124   40358 command_runner.go:130] > # signature_policy = ""
	I0924 19:14:48.755129   40358 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0924 19:14:48.755137   40358 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0924 19:14:48.755145   40358 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0924 19:14:48.755153   40358 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0924 19:14:48.755160   40358 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0924 19:14:48.755165   40358 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0924 19:14:48.755173   40358 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0924 19:14:48.755182   40358 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0924 19:14:48.755187   40358 command_runner.go:130] > # changing them here.
	I0924 19:14:48.755191   40358 command_runner.go:130] > # insecure_registries = [
	I0924 19:14:48.755196   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755202   40358 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0924 19:14:48.755209   40358 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0924 19:14:48.755214   40358 command_runner.go:130] > # image_volumes = "mkdir"
	I0924 19:14:48.755221   40358 command_runner.go:130] > # Temporary directory to use for storing big files
	I0924 19:14:48.755226   40358 command_runner.go:130] > # big_files_temporary_dir = ""
	I0924 19:14:48.755233   40358 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0924 19:14:48.755237   40358 command_runner.go:130] > # CNI plugins.
	I0924 19:14:48.755241   40358 command_runner.go:130] > [crio.network]
	I0924 19:14:48.755247   40358 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0924 19:14:48.755254   40358 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0924 19:14:48.755258   40358 command_runner.go:130] > # cni_default_network = ""
	I0924 19:14:48.755265   40358 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0924 19:14:48.755270   40358 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0924 19:14:48.755277   40358 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0924 19:14:48.755281   40358 command_runner.go:130] > # plugin_dirs = [
	I0924 19:14:48.755287   40358 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0924 19:14:48.755290   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755296   40358 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0924 19:14:48.755302   40358 command_runner.go:130] > [crio.metrics]
	I0924 19:14:48.755307   40358 command_runner.go:130] > # Globally enable or disable metrics support.
	I0924 19:14:48.755313   40358 command_runner.go:130] > enable_metrics = true
	I0924 19:14:48.755318   40358 command_runner.go:130] > # Specify enabled metrics collectors.
	I0924 19:14:48.755324   40358 command_runner.go:130] > # Per default all metrics are enabled.
	I0924 19:14:48.755330   40358 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0924 19:14:48.755338   40358 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0924 19:14:48.755347   40358 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0924 19:14:48.755353   40358 command_runner.go:130] > # metrics_collectors = [
	I0924 19:14:48.755357   40358 command_runner.go:130] > # 	"operations",
	I0924 19:14:48.755363   40358 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0924 19:14:48.755368   40358 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0924 19:14:48.755374   40358 command_runner.go:130] > # 	"operations_errors",
	I0924 19:14:48.755378   40358 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0924 19:14:48.755384   40358 command_runner.go:130] > # 	"image_pulls_by_name",
	I0924 19:14:48.755388   40358 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0924 19:14:48.755397   40358 command_runner.go:130] > # 	"image_pulls_failures",
	I0924 19:14:48.755403   40358 command_runner.go:130] > # 	"image_pulls_successes",
	I0924 19:14:48.755407   40358 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0924 19:14:48.755414   40358 command_runner.go:130] > # 	"image_layer_reuse",
	I0924 19:14:48.755418   40358 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0924 19:14:48.755424   40358 command_runner.go:130] > # 	"containers_oom_total",
	I0924 19:14:48.755428   40358 command_runner.go:130] > # 	"containers_oom",
	I0924 19:14:48.755434   40358 command_runner.go:130] > # 	"processes_defunct",
	I0924 19:14:48.755438   40358 command_runner.go:130] > # 	"operations_total",
	I0924 19:14:48.755444   40358 command_runner.go:130] > # 	"operations_latency_seconds",
	I0924 19:14:48.755448   40358 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0924 19:14:48.755453   40358 command_runner.go:130] > # 	"operations_errors_total",
	I0924 19:14:48.755461   40358 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0924 19:14:48.755471   40358 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0924 19:14:48.755480   40358 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0924 19:14:48.755490   40358 command_runner.go:130] > # 	"image_pulls_success_total",
	I0924 19:14:48.755499   40358 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0924 19:14:48.755510   40358 command_runner.go:130] > # 	"containers_oom_count_total",
	I0924 19:14:48.755519   40358 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0924 19:14:48.755529   40358 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0924 19:14:48.755536   40358 command_runner.go:130] > # ]
	I0924 19:14:48.755541   40358 command_runner.go:130] > # The port on which the metrics server will listen.
	I0924 19:14:48.755547   40358 command_runner.go:130] > # metrics_port = 9090
	I0924 19:14:48.755553   40358 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0924 19:14:48.755557   40358 command_runner.go:130] > # metrics_socket = ""
	I0924 19:14:48.755564   40358 command_runner.go:130] > # The certificate for the secure metrics server.
	I0924 19:14:48.755569   40358 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0924 19:14:48.755577   40358 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0924 19:14:48.755584   40358 command_runner.go:130] > # certificate on any modification event.
	I0924 19:14:48.755588   40358 command_runner.go:130] > # metrics_cert = ""
	I0924 19:14:48.755594   40358 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0924 19:14:48.755599   40358 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0924 19:14:48.755604   40358 command_runner.go:130] > # metrics_key = ""
	I0924 19:14:48.755609   40358 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0924 19:14:48.755616   40358 command_runner.go:130] > [crio.tracing]
	I0924 19:14:48.755622   40358 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0924 19:14:48.755628   40358 command_runner.go:130] > # enable_tracing = false
	I0924 19:14:48.755634   40358 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0924 19:14:48.755641   40358 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0924 19:14:48.755648   40358 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0924 19:14:48.755655   40358 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0924 19:14:48.755659   40358 command_runner.go:130] > # CRI-O NRI configuration.
	I0924 19:14:48.755665   40358 command_runner.go:130] > [crio.nri]
	I0924 19:14:48.755670   40358 command_runner.go:130] > # Globally enable or disable NRI.
	I0924 19:14:48.755676   40358 command_runner.go:130] > # enable_nri = false
	I0924 19:14:48.755685   40358 command_runner.go:130] > # NRI socket to listen on.
	I0924 19:14:48.755692   40358 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0924 19:14:48.755697   40358 command_runner.go:130] > # NRI plugin directory to use.
	I0924 19:14:48.755703   40358 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0924 19:14:48.755708   40358 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0924 19:14:48.755715   40358 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0924 19:14:48.755720   40358 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0924 19:14:48.755726   40358 command_runner.go:130] > # nri_disable_connections = false
	I0924 19:14:48.755731   40358 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0924 19:14:48.755738   40358 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0924 19:14:48.755743   40358 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0924 19:14:48.755749   40358 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0924 19:14:48.755756   40358 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0924 19:14:48.755761   40358 command_runner.go:130] > [crio.stats]
	I0924 19:14:48.755767   40358 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0924 19:14:48.755774   40358 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0924 19:14:48.755778   40358 command_runner.go:130] > # stats_collection_period = 0
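
	The dump above is the complete CRI-O configuration minikube found on the node. For readers who want to inspect individual values programmatically rather than grepping the log, here is a minimal sketch (not minikube code; it assumes the github.com/BurntSushi/toml package) that reads a few of the [crio.runtime] keys shown above from /etc/crio/crio.conf:

	// read_crio_conf.go - minimal sketch, assuming github.com/BurntSushi/toml is available.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				PidsLimit      int64  `toml:"pids_limit"`      // 1024 in the dump above
				DefaultRuntime string `toml:"default_runtime"` // empty: the commented-out default ("runc") applies
				DropInfraCtr   bool   `toml:"drop_infra_ctr"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatalf("decoding crio.conf: %v", err)
		}
		fmt.Printf("pids_limit=%d default_runtime=%q drop_infra_ctr=%v\n",
			cfg.Crio.Runtime.PidsLimit, cfg.Crio.Runtime.DefaultRuntime, cfg.Crio.Runtime.DropInfraCtr)
	}
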
	I0924 19:14:48.755852   40358 cni.go:84] Creating CNI manager for ""
	I0924 19:14:48.755863   40358 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 19:14:48.755874   40358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:14:48.755898   40358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-624105 NodeName:multinode-624105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:14:48.756024   40358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-624105"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:14:48.756086   40358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:14:48.765851   40358 command_runner.go:130] > kubeadm
	I0924 19:14:48.765865   40358 command_runner.go:130] > kubectl
	I0924 19:14:48.765870   40358 command_runner.go:130] > kubelet
	I0924 19:14:48.765890   40358 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:14:48.765954   40358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:14:48.774993   40358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0924 19:14:48.790594   40358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:14:48.806557   40358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0924 19:14:48.822268   40358 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0924 19:14:48.826009   40358 command_runner.go:130] > 192.168.39.206	control-plane.minikube.internal
	I0924 19:14:48.826070   40358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:14:48.956770   40358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:14:48.971072   40358 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105 for IP: 192.168.39.206
	I0924 19:14:48.971099   40358 certs.go:194] generating shared ca certs ...
	I0924 19:14:48.971120   40358 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:14:48.971312   40358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:14:48.971376   40358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:14:48.971392   40358 certs.go:256] generating profile certs ...
	I0924 19:14:48.971497   40358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/client.key
	I0924 19:14:48.971582   40358 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key.11e7b858
	I0924 19:14:48.971637   40358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key
	I0924 19:14:48.971655   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 19:14:48.971678   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 19:14:48.971694   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 19:14:48.971712   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 19:14:48.971732   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 19:14:48.971751   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 19:14:48.971767   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 19:14:48.971781   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 19:14:48.971920   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:14:48.971996   40358 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:14:48.972010   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:14:48.972044   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:14:48.972082   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:14:48.972113   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:14:48.972165   40358 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:14:48.972206   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem -> /usr/share/ca-certificates/10949.pem
	I0924 19:14:48.972225   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> /usr/share/ca-certificates/109492.pem
	I0924 19:14:48.972240   40358 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:48.973058   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:14:48.996455   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:14:49.019646   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:14:49.041856   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:14:49.064468   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:14:49.086036   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:14:49.108625   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:14:49.132800   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/multinode-624105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:14:49.155107   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:14:49.176420   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:14:49.198650   40358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:14:49.221496   40358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:14:49.237212   40358 ssh_runner.go:195] Run: openssl version
	I0924 19:14:49.242192   40358 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0924 19:14:49.242332   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:14:49.253966   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258279   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258334   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.258397   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:14:49.263832   40358 command_runner.go:130] > 3ec20f2e
	I0924 19:14:49.263890   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:14:49.273932   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:14:49.285289   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289491   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289622   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.289675   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:14:49.295185   40358 command_runner.go:130] > b5213941
	I0924 19:14:49.295246   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:14:49.305849   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:14:49.318024   40358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322171   40358 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322412   40358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.322461   40358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:14:49.327863   40358 command_runner.go:130] > 51391683
	I0924 19:14:49.328062   40358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:14:49.338229   40358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:14:49.342367   40358 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:14:49.342388   40358 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0924 19:14:49.342397   40358 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0924 19:14:49.342408   40358 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 19:14:49.342415   40358 command_runner.go:130] > Access: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342419   40358 command_runner.go:130] > Modify: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342424   40358 command_runner.go:130] > Change: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342429   40358 command_runner.go:130] >  Birth: 2024-09-24 19:07:49.344338370 +0000
	I0924 19:14:49.342480   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:14:49.347737   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.347920   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:14:49.352961   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.353129   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:14:49.358358   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.358403   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:14:49.363474   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.363745   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:14:49.368873   40358 command_runner.go:130] > Certificate will not expire
	I0924 19:14:49.369067   40358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:14:49.374451   40358 command_runner.go:130] > Certificate will not expire
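
	Each of the "openssl x509 -checkend 86400" runs above asks whether the certificate expires within the next 86400 seconds (24 hours); a failing check would force minikube to regenerate the certificate before starting the cluster. For reference, a minimal Go sketch of the same check (not minikube's implementation; the path is taken from the log above) looks like this:

	// checkend.go - minimal sketch of "openssl x509 -noout -in <cert> -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(86400 * time.Second) // same window as -checkend 86400
		if cert.NotAfter.Before(deadline) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}
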
	I0924 19:14:49.374512   40358 kubeadm.go:392] StartCluster: {Name:multinode-624105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-624105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:14:49.374692   40358 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:14:49.374737   40358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:14:49.418288   40358 command_runner.go:130] > c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571
	I0924 19:14:49.418319   40358 command_runner.go:130] > 779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800
	I0924 19:14:49.418329   40358 command_runner.go:130] > 5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4
	I0924 19:14:49.418340   40358 command_runner.go:130] > 1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f
	I0924 19:14:49.418349   40358 command_runner.go:130] > 214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737
	I0924 19:14:49.418358   40358 command_runner.go:130] > ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189
	I0924 19:14:49.418370   40358 command_runner.go:130] > cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33
	I0924 19:14:49.418384   40358 command_runner.go:130] > ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9
	I0924 19:14:49.418411   40358 cri.go:89] found id: "c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571"
	I0924 19:14:49.418421   40358 cri.go:89] found id: "779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800"
	I0924 19:14:49.418429   40358 cri.go:89] found id: "5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4"
	I0924 19:14:49.418434   40358 cri.go:89] found id: "1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f"
	I0924 19:14:49.418440   40358 cri.go:89] found id: "214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737"
	I0924 19:14:49.418445   40358 cri.go:89] found id: "ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189"
	I0924 19:14:49.418452   40358 cri.go:89] found id: "cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33"
	I0924 19:14:49.418457   40358 cri.go:89] found id: "ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9"
	I0924 19:14:49.418462   40358 cri.go:89] found id: ""
	I0924 19:14:49.418517   40358 ssh_runner.go:195] Run: sudo runc list -f json
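	For reference, the two container listings above can be reproduced by hand against the same node. A minimal sketch, assuming minikube ssh access to the multinode-624105 profile and the default CRI-O socket (the crictl and runc invocations are taken verbatim from the Run: lines in this log; the minikube ssh wrapper is an assumption, not part of the captured output):

	    # IDs of all kube-system containers, matching the cri.go listing above
	    minikube -p multinode-624105 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	    # The low-level runtime listing that the harness runs next in this log
	    minikube -p multinode-624105 ssh -- sudo runc list -f json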
	
	
	==> CRI-O <==
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.947760875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205541947738849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c5b28ca-c023-4220-a34f-a91135377851 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.948223621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3895a2f0-ad6a-4242-b595-3b2581facdd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.948277896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3895a2f0-ad6a-4242-b595-3b2581facdd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.948653232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3895a2f0-ad6a-4242-b595-3b2581facdd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.985244492Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6caf2199-6f02-4a10-9d61-04ffa16451ea name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.985319426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6caf2199-6f02-4a10-9d61-04ffa16451ea name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.987271228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f5e82c0-c72d-41da-ba2b-132abbe18030 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.987972897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205541987919090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f5e82c0-c72d-41da-ba2b-132abbe18030 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.988544303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48b343ab-41b8-40fa-8558-38f9d51fb9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.988611752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48b343ab-41b8-40fa-8558-38f9d51fb9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:01 multinode-624105 crio[2691]: time="2024-09-24 19:19:01.988956049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48b343ab-41b8-40fa-8558-38f9d51fb9e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.027235881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dc87861-f594-4a4b-b577-23a10b5fc277 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.027352195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dc87861-f594-4a4b-b577-23a10b5fc277 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.028427306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78196b64-11d7-4d9a-8be2-118e96a83e3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.028924223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205542028898428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78196b64-11d7-4d9a-8be2-118e96a83e3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.029361021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ceaa9da-8e98-4861-84fc-3b8a6e75c198 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.029432882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ceaa9da-8e98-4861-84fc-3b8a6e75c198 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.029773727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ceaa9da-8e98-4861-84fc-3b8a6e75c198 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.067577563Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88eacafd-b318-4034-9d71-214a99fc91c4 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.067659450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88eacafd-b318-4034-9d71-214a99fc91c4 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.068819249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf1de1d4-251e-4e5f-a9ef-31436ef8fb08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.069216533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205542069191905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf1de1d4-251e-4e5f-a9ef-31436ef8fb08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.069731304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=944993ab-c22c-4f92-b969-1f436942c396 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.069787292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=944993ab-c22c-4f92-b969-1f436942c396 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:19:02 multinode-624105 crio[2691]: time="2024-09-24 19:19:02.070117147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:debeb8627dc23b7ca74a4c6264401f9f6051c646bb86f8820ddff990470ce7aa,PodSandboxId:d411e6764001804f39246031c5e9769a6069a12cb0fae9241ea76823147c39fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727205329423555433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08,PodSandboxId:88ce5911c653068f50d29c6f8e4d5117b0e629a08150dadb342ecbdfc1eac1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727205295910154114,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71,PodSandboxId:6f80a6fc6a01c2016be2839c5d810f30bd447cd723b4411ab6acf89eebc14a08,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727205295861414182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dc96b8e02fa5c03c146b714e6083ba36d74fc9fc054e98eab2394a6e66cc6d,PodSandboxId:7c938e3431dac55dd7abbaf8f4e0b9c2ae99165f7946fa540d619a89ee3fc836,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205295693893583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8,PodSandboxId:c60043af4455b5799d444f464e442c63eef454b7180d6ee04ea4f4dc729d102b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727205295691024519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc-55c9bb05c558,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd,PodSandboxId:1029ee7cd042f33247d0ac45159b80168b7a9c6beeb043cfd55a33cdabd368c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727205291862320434,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4,PodSandboxId:2eba9b87ef6211bbd9ff281f75364f797016233e1ba4c28c1e055b12593fc3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727205291800241393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1,PodSandboxId:7b5f19cd9cde3b5a07fb5ee5c74fcf51f9199a1bed3aa626e99f239536c99352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727205291815589829,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b5817fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc,PodSandboxId:87d3d012af8d41b5643c63c49b49f94f1756fbd09e86972bc90acca6b414a9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727205291798161388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702eb68e64dd378a357b7b5eb48c13fbf285b9a0488562875e5369d2cc2e684d,PodSandboxId:49e25cda6a219ddf5332ce53b0a67b3e0df90c36ad6b18fe7bee0b9a13ef2ac5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727204976052370520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b22dm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a640a461-f615-435d-9663-c7530e95b0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571,PodSandboxId:776f5812b09ee792d48ca2e656f195f066f4ccd19cd576e7e5750dfd522e7b6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727204925122691093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7bx4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e178-665b-4c5a-9420-8823b9783d98,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779b0041b60cd698d1045f8ae870776842bb36ef5553884956cf806ea012d800,PodSandboxId:1d20bab8a8dae2063259fed1041135dcf4d258a8e572f63fc8acbaf54f842881,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727204925073837932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: cc00e955-ea40-4af1-8f79-6edffd374dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f,PodSandboxId:e81fc3310529b35cdf216007300d33b72d2bc983a1a13b0bda131398ca529a26,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727204883579640736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5hztc,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 059d1be7-4667-4517-82d5-e3979afead26,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4,PodSandboxId:667dfa7f0b277a67bc65b35fe1c83dfc39ad019223f3b2011d4fd784becbf57f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727204883580089194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4sr25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f923c81-b7e3-49e7-bdcc
-55c9bb05c558,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737,PodSandboxId:701a1c92b5e7dc76e5147b108a7ee1aa501afd67549d4f9bb253210622cf5522,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727204873108027780,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50b581
7fd08e77fa481521ea07c2217,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189,PodSandboxId:16b7bc1ab28e4e7e840b9e525c02ab7a1964a4c1a9160f624355a33b5d594c9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727204873096083716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002b0980d8e9353de2c0ce
443f9caf6a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33,PodSandboxId:b388e11fd6f335b903b9201aef85ab9cf5d92dc6262f216aa449190cd18f8b49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727204873086507903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95236b1da7ce6dbd963d4e3c6daa9ec,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9,PodSandboxId:82d82948f36c9b8ccd3efeb23862a0f339fa5b8b66d0342d7e1b6896e8db9886,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727204872936618512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-624105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2f61830de29579fef04a3c6b4c1d8b6,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=944993ab-c22c-4f92-b969-1f436942c396 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	debeb8627dc23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   d411e67640018       busybox-7dff88458-b22dm
	6ee4e13311361       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   88ce5911c6530       kindnet-5hztc
	ba242595624ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   6f80a6fc6a01c       coredns-7c65d6cfc9-7bx4l
	34dc96b8e02fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   7c938e3431dac       storage-provisioner
	12f0d3f81d7d5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   c60043af4455b       kube-proxy-4sr25
	c8646faab03b0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   1029ee7cd042f       kube-apiserver-multinode-624105
	63e4ba44e3427       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   7b5f19cd9cde3       kube-controller-manager-multinode-624105
	297f0f1f9170a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   2eba9b87ef621       kube-scheduler-multinode-624105
	0310cb335531e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   87d3d012af8d4       etcd-multinode-624105
	702eb68e64dd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   49e25cda6a219       busybox-7dff88458-b22dm
	c23a3cbd5cfd8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   776f5812b09ee       coredns-7c65d6cfc9-7bx4l
	779b0041b60cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   1d20bab8a8dae       storage-provisioner
	5c10e265d8db3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   667dfa7f0b277       kube-proxy-4sr25
	1df74ab6f5ff0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   e81fc3310529b       kindnet-5hztc
	214cabb794f93       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      11 minutes ago      Exited              kube-controller-manager   0                   701a1c92b5e7d       kube-controller-manager-multinode-624105
	ba04f08547dac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   16b7bc1ab28e4       kube-scheduler-multinode-624105
	cd329fa120f18       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      11 minutes ago      Exited              kube-apiserver            0                   b388e11fd6f33       kube-apiserver-multinode-624105
	ca9ffec06dd06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   82d82948f36c9       etcd-multinode-624105
	
	
	==> coredns [ba242595624ecee29081d99f5217488673a44d80057bad90e1a3c9d78809cb71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35556 - 10213 "HINFO IN 3524561998326851029.7241639463776507848. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026108159s
	
	
	==> coredns [c23a3cbd5cfd8c07faf3640dee8757614f355cfe8dc68c0f7ea0950505558571] <==
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002005567s
	[INFO] 10.244.1.2:37518 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236585s
	[INFO] 10.244.1.2:48076 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100055s
	[INFO] 10.244.1.2:54152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001322878s
	[INFO] 10.244.1.2:51190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090554s
	[INFO] 10.244.1.2:55650 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007019s
	[INFO] 10.244.1.2:57655 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011142s
	[INFO] 10.244.0.3:52016 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167023s
	[INFO] 10.244.0.3:46783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066374s
	[INFO] 10.244.0.3:52883 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054553s
	[INFO] 10.244.0.3:55160 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078248s
	[INFO] 10.244.1.2:51362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179033s
	[INFO] 10.244.1.2:54474 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133922s
	[INFO] 10.244.1.2:57810 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104456s
	[INFO] 10.244.1.2:35217 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193442s
	[INFO] 10.244.0.3:58692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019231s
	[INFO] 10.244.0.3:40896 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141448s
	[INFO] 10.244.0.3:40362 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000227151s
	[INFO] 10.244.0.3:49887 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095212s
	[INFO] 10.244.1.2:36728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223817s
	[INFO] 10.244.1.2:44161 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133802s
	[INFO] 10.244.1.2:52014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019975s
	[INFO] 10.244.1.2:37081 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111589s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-624105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-624105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=multinode-624105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_07_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:07:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-624105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:18:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:07:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:14:54 +0000   Tue, 24 Sep 2024 19:08:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-624105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 105bf671b3ed4f6e882b36bc7c330a73
	  System UUID:                105bf671-b3ed-4f6e-882b-36bc7c330a73
	  Boot ID:                    c1b43a78-f120-43d0-b77c-4cfca1797fa7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b22dm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 coredns-7c65d6cfc9-7bx4l                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-624105                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-5hztc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-624105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-624105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4sr25                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-624105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                    node-controller  Node multinode-624105 event: Registered Node multinode-624105 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-624105 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node multinode-624105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node multinode-624105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node multinode-624105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node multinode-624105 event: Registered Node multinode-624105 in Controller
	
	
	Name:               multinode-624105-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-624105-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=multinode-624105
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T19_15_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:15:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-624105-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:16:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:17:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:17:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:17:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 19:16:06 +0000   Tue, 24 Sep 2024 19:17:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-624105-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 57aa8182c1a84d5c991ebf30059fa699
	  System UUID:                57aa8182-c1a8-4d5c-991e-bf30059fa699
	  Boot ID:                    f1eb433a-4b82-40b9-96fb-7a46ea2ec550
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ln4qn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-prfnr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-proxy-wp4bg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m49s (x2 over 9m50s)  kubelet          Node multinode-624105-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m49s (x2 over 9m50s)  kubelet          Node multinode-624105-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m49s (x2 over 9m50s)  kubelet          Node multinode-624105-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-624105-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-624105-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-624105-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-624105-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-624105-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-624105-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058731] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062782] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.144551] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.139600] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.263591] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.552997] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.695499] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.054418] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989191] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.074164] kauditd_printk_skb: 69 callbacks suppressed
	[Sep24 19:08] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.104239] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[ +41.559679] kauditd_printk_skb: 69 callbacks suppressed
	[Sep24 19:09] kauditd_printk_skb: 12 callbacks suppressed
	[Sep24 19:14] systemd-fstab-generator[2616]: Ignoring "noauto" option for root device
	[  +0.141657] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.160737] systemd-fstab-generator[2642]: Ignoring "noauto" option for root device
	[  +0.140339] systemd-fstab-generator[2654]: Ignoring "noauto" option for root device
	[  +0.281769] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.640557] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +2.092155] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +4.656845] kauditd_printk_skb: 184 callbacks suppressed
	[Sep24 19:15] systemd-fstab-generator[3730]: Ignoring "noauto" option for root device
	[  +0.093524] kauditd_printk_skb: 34 callbacks suppressed
	[ +17.825745] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [0310cb335531e8803bdc332f5ede315f64cc6cf92f2caff23f038eca744d3bdc] <==
	{"level":"info","ts":"2024-09-24T19:14:52.242058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 switched to configuration voters=(10182824043138087653)"}
	{"level":"info","ts":"2024-09-24T19:14:52.243781Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","added-peer-id":"8d50a8842d8d7ae5","added-peer-peer-urls":["https://192.168.39.206:2380"]}
	{"level":"info","ts":"2024-09-24T19:14:52.243914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:14:52.243961Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:14:52.286627Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:14:52.286877Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:14:52.286926Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:14:52.287028Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:14:52.287055Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:14:53.580231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2024-09-24T19:14:53.580381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.580403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-09-24T19:14:53.585386Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-624105 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:14:53.585510Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:14:53.585544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:14:53.585894Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:14:53.585921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:14:53.586624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:14:53.586758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:14:53.587515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2024-09-24T19:14:53.587538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ca9ffec06dd06e522e3d734ac79bf6ef3dbc4dbf6c87fedf3635d62b17b441e9] <==
	{"level":"info","ts":"2024-09-24T19:07:54.114375Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:07:54.114532Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114640Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114671Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:07:54.114694Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:07:54.114713Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:07:54.114721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:07:54.115441Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:07:54.117032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:07:54.117291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:07:54.118028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"warn","ts":"2024-09-24T19:09:12.985468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.898148ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8855644931764805458 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:7ae592256ed45f51>","response":"size:41"}
	{"level":"info","ts":"2024-09-24T19:09:20.890078Z","caller":"traceutil/trace.go:171","msg":"trace[1565786653] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"205.882799ms","start":"2024-09-24T19:09:20.684170Z","end":"2024-09-24T19:09:20.890053Z","steps":["trace[1565786653] 'process raft request'  (duration: 205.783285ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:10:04.490101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.692187ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8855644931764805950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-624105-m03.17f843ccdbf8af52\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-624105-m03.17f843ccdbf8af52\" value_size:642 lease:8855644931764805457 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-24T19:10:04.490312Z","caller":"traceutil/trace.go:171","msg":"trace[1025340962] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"233.684519ms","start":"2024-09-24T19:10:04.256609Z","end":"2024-09-24T19:10:04.490293Z","steps":["trace[1025340962] 'process raft request'  (duration: 74.540529ms)","trace[1025340962] 'compare'  (duration: 158.606401ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T19:13:16.281374Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T19:13:16.281430Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-624105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"]}
	{"level":"warn","ts":"2024-09-24T19:13:16.281510Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.281589Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.335710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.206:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T19:13:16.335759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.206:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T19:13:16.335811Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d50a8842d8d7ae5","current-leader-member-id":"8d50a8842d8d7ae5"}
	{"level":"info","ts":"2024-09-24T19:13:16.338446Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:13:16.338621Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-09-24T19:13:16.338643Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-624105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"]}
	
	
	==> kernel <==
	 19:19:02 up 11 min,  0 users,  load average: 0.01, 0.09, 0.08
	Linux multinode-624105 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1df74ab6f5ff072ac4f7f36212d6685e8eb667ae625599b24aaca12983220e9f] <==
	I0924 19:12:34.478736       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:44.469142       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:12:44.469245       1 main.go:299] handling current node
	I0924 19:12:44.469274       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:12:44.469293       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:12:44.469477       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:12:44.469508       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:54.469711       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:12:54.469813       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:12:54.469939       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:12:54.469961       1 main.go:299] handling current node
	I0924 19:12:54.469984       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:12:54.469999       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:04.469164       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:13:04.469215       1 main.go:299] handling current node
	I0924 19:13:04.469230       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:13:04.469235       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:04.469414       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:13:04.469436       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	I0924 19:13:14.472124       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:13:14.472185       1 main.go:299] handling current node
	I0924 19:13:14.472209       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:13:14.472215       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:13:14.472311       1 main.go:295] Handling node with IPs: map[192.168.39.64:{}]
	I0924 19:13:14.472385       1 main.go:322] Node multinode-624105-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6ee4e133113611389db3812d520537ee26cebdd0164468a296d13b4be5b35b08] <==
	I0924 19:17:56.665550       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:06.665712       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:06.665810       1 main.go:299] handling current node
	I0924 19:18:06.665836       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:06.665854       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:16.664997       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:16.665037       1 main.go:299] handling current node
	I0924 19:18:16.665057       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:16.665062       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:26.665623       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:26.665667       1 main.go:299] handling current node
	I0924 19:18:26.665683       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:26.665688       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:36.674120       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:36.674173       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:36.674297       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:36.674317       1 main.go:299] handling current node
	I0924 19:18:46.674119       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:46.674234       1 main.go:299] handling current node
	I0924 19:18:46.674268       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:46.674288       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	I0924 19:18:56.665193       1 main.go:295] Handling node with IPs: map[192.168.39.206:{}]
	I0924 19:18:56.665247       1 main.go:299] handling current node
	I0924 19:18:56.665265       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0924 19:18:56.665270       1 main.go:322] Node multinode-624105-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c8646faab03b02f4a67d2a6dc643b844e5704cb9bde2c74e7744d30a92f062bd] <==
	I0924 19:14:54.708149       1 policy_source.go:224] refreshing policies
	I0924 19:14:54.727495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 19:14:54.728676       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 19:14:54.728703       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 19:14:54.734623       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 19:14:54.734906       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 19:14:54.735607       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 19:14:54.735785       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 19:14:54.736808       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 19:14:54.739207       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 19:14:54.744579       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 19:14:54.744739       1 aggregator.go:171] initial CRD sync complete...
	I0924 19:14:54.744771       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 19:14:54.744793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 19:14:54.744815       1 cache.go:39] Caches are synced for autoregister controller
	E0924 19:14:54.756299       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0924 19:14:54.788947       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 19:14:55.650866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:14:56.647758       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 19:14:56.777623       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 19:14:56.794311       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 19:14:56.878868       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:14:56.885968       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:14:58.232929       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:14:58.334089       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [cd329fa120f189f45a33adae45f15d67e8eb4e0773abd4026327baf270b2fd33] <==
	I0924 19:07:56.214068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0924 19:07:56.218621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0924 19:07:56.218654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:07:56.794590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:07:56.838167       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:07:56.934779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0924 19:07:56.940300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I0924 19:07:56.941123       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 19:07:56.944912       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:07:57.288956       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 19:07:58.067299       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 19:07:58.087139       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 19:07:58.103768       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 19:08:02.637146       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 19:08:02.937596       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 19:09:37.390091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35286: use of closed network connection
	E0924 19:09:37.551590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35308: use of closed network connection
	E0924 19:09:37.739795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35314: use of closed network connection
	E0924 19:09:37.901745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35324: use of closed network connection
	E0924 19:09:38.073095       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35330: use of closed network connection
	E0924 19:09:38.236102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35342: use of closed network connection
	E0924 19:09:38.506696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35358: use of closed network connection
	E0924 19:09:38.660508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35368: use of closed network connection
	E0924 19:09:38.997708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:35410: use of closed network connection
	I0924 19:13:16.280149       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [214cabb794f93285db3098b6d687d6565105552e0ae66157f25b5bbdcf7e3737] <==
	I0924 19:10:52.268973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:52.269146       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:10:53.329936       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-624105-m03\" does not exist"
	I0924 19:10:53.330027       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:10:53.345210       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-624105-m03" podCIDRs=["10.244.3.0/24"]
	I0924 19:10:53.345281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.345359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.359047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:53.766711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:54.081975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:10:57.139657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:03.664016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:11.166948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:11:11.167051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:11.183020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:12.078584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.098460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m03"
	I0924 19:11:57.098740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:11:57.100814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.120559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:11:57.120808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:11:57.154394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.471165ms"
	I0924 19:11:57.154531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.596µs"
	I0924 19:12:02.163865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:12:12.235078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	
	
	==> kube-controller-manager [63e4ba44e34277911cc6a8014b0bd52f2d3688ead57d591eab0093d823533ef1] <==
	I0924 19:16:13.504646       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:13.525674       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-624105-m03" podCIDRs=["10.244.2.0/24"]
	I0924 19:16:13.525715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:13.525735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:13.944465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:14.262858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:18.346844       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:23.648109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:32.444289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:32.444513       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:32.455263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:33.255953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:37.260082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:37.272897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:16:37.693675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-624105-m02"
	I0924 19:16:37.694164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m03"
	I0924 19:17:18.065828       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d2292"
	I0924 19:17:18.096070       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d2292"
	I0924 19:17:18.096195       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-hl4xz"
	I0924 19:17:18.124719       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-hl4xz"
	I0924 19:17:18.273857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:17:18.291584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	I0924 19:17:18.296408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.500303ms"
	I0924 19:17:18.296603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.91µs"
	I0924 19:17:23.362482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-624105-m02"
	
	
	==> kube-proxy [12f0d3f81d7d524523e95674487533c04331b81e01204a9df8ec84e4af7db9e8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:14:56.045447       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:14:56.055734       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E0924 19:14:56.055808       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:14:56.108730       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:14:56.108781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:14:56.108807       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:14:56.111536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:14:56.111827       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:14:56.112104       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:14:56.113202       1 config.go:199] "Starting service config controller"
	I0924 19:14:56.113428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:14:56.113518       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:14:56.113544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:14:56.113962       1 config.go:328] "Starting node config controller"
	I0924 19:14:56.113997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:14:56.214481       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:14:56.214569       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:14:56.214580       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5c10e265d8db30ac50ab51c5979870ea882d6606e81d05528a47757723a07eb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:08:04.017968       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:08:04.102710       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E0924 19:08:04.102925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:08:04.360294       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:08:04.360946       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:08:04.361049       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:08:04.366546       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:08:04.367472       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:08:04.367704       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:08:04.372893       1 config.go:199] "Starting service config controller"
	I0924 19:08:04.374819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:08:04.373152       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:08:04.374874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:08:04.374888       1 config.go:328] "Starting node config controller"
	I0924 19:08:04.374905       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:08:04.474985       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:08:04.475038       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:08:04.475068       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [297f0f1f9170a8f456457d2938156bd59a07ef767f1a5bacc66d775a256d8fb4] <==
	I0924 19:14:53.253368       1 serving.go:386] Generated self-signed cert in-memory
	W0924 19:14:54.674691       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:14:54.674835       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:14:54.675013       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:14:54.675039       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:14:54.744360       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 19:14:54.744979       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:14:54.752689       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 19:14:54.753650       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:14:54.753678       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:14:54.753700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 19:14:54.853931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ba04f08547dac200c40e0bc1055251df174fec8d493f6ce21af90b62ecd0f189] <==
	E0924 19:07:55.330156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:55.328667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:07:55.330258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:55.326730       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:07:55.330405       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:07:55.326942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:07:55.330508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.177752       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:07:56.177863       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:07:56.355728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 19:07:56.355937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.379838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 19:07:56.379925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.556159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:07:56.556283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.561896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 19:07:56.562069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.576356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:07:56.576530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.578593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:07:56.578676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:07:56.580727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:07:56.580762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 19:07:58.822917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 19:13:16.290195       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 19:17:51 multinode-624105 kubelet[2901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 19:17:51 multinode-624105 kubelet[2901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 19:17:51 multinode-624105 kubelet[2901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 19:17:51 multinode-624105 kubelet[2901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 19:17:51 multinode-624105 kubelet[2901]: E0924 19:17:51.290220    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205471289606881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:17:51 multinode-624105 kubelet[2901]: E0924 19:17:51.290254    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205471289606881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:01 multinode-624105 kubelet[2901]: E0924 19:18:01.292423    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205481292123116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:01 multinode-624105 kubelet[2901]: E0924 19:18:01.293143    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205481292123116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:11 multinode-624105 kubelet[2901]: E0924 19:18:11.294577    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205491294103461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:11 multinode-624105 kubelet[2901]: E0924 19:18:11.294613    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205491294103461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:21 multinode-624105 kubelet[2901]: E0924 19:18:21.297698    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205501297309855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:21 multinode-624105 kubelet[2901]: E0924 19:18:21.298035    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205501297309855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:31 multinode-624105 kubelet[2901]: E0924 19:18:31.300019    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205511299749945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:31 multinode-624105 kubelet[2901]: E0924 19:18:31.300045    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205511299749945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:41 multinode-624105 kubelet[2901]: E0924 19:18:41.301528    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205521300747008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:41 multinode-624105 kubelet[2901]: E0924 19:18:41.301564    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205521300747008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:51 multinode-624105 kubelet[2901]: E0924 19:18:51.221445    2901 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 19:18:51 multinode-624105 kubelet[2901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 19:18:51 multinode-624105 kubelet[2901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 19:18:51 multinode-624105 kubelet[2901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 19:18:51 multinode-624105 kubelet[2901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 19:18:51 multinode-624105 kubelet[2901]: E0924 19:18:51.303515    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205531302717740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:18:51 multinode-624105 kubelet[2901]: E0924 19:18:51.303562    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205531302717740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:19:01 multinode-624105 kubelet[2901]: E0924 19:19:01.305036    2901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205541304583408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:19:01 multinode-624105 kubelet[2901]: E0924 19:19:01.305062    2901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205541304583408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:19:01.686370   42789 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19700-3751/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-624105 -n multinode-624105
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-624105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.54s)

                                                
                                    
x
+
TestPreload (158.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-184922 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-184922 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.573838858s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-184922 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-184922 image pull gcr.io/k8s-minikube/busybox: (2.374737436s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-184922
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-184922: (6.587115083s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-184922 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0924 19:24:49.789907   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-184922 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.540996285s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-184922 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-24 19:25:17.281265103 +0000 UTC m=+3926.383276833
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-184922 -n test-preload-184922
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-184922 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105 sudo cat                                       | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt                       | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m02:/home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n                                                                 | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | multinode-624105-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-624105 ssh -n multinode-624105-m02 sudo cat                                   | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	|         | /home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-624105 node stop m03                                                          | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:10 UTC |
	| node    | multinode-624105 node start                                                             | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:10 UTC | 24 Sep 24 19:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| stop    | -p multinode-624105                                                                     | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:11 UTC |                     |
	| start   | -p multinode-624105                                                                     | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:13 UTC | 24 Sep 24 19:16 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC |                     |
	| node    | multinode-624105 node delete                                                            | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC | 24 Sep 24 19:16 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-624105 stop                                                                   | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:16 UTC |                     |
	| start   | -p multinode-624105                                                                     | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:19 UTC | 24 Sep 24 19:21 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-624105                                                                | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:21 UTC |                     |
	| start   | -p multinode-624105-m02                                                                 | multinode-624105-m02 | jenkins | v1.34.0 | 24 Sep 24 19:21 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-624105-m03                                                                 | multinode-624105-m03 | jenkins | v1.34.0 | 24 Sep 24 19:21 UTC | 24 Sep 24 19:22 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-624105                                                                 | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:22 UTC |                     |
	| delete  | -p multinode-624105-m03                                                                 | multinode-624105-m03 | jenkins | v1.34.0 | 24 Sep 24 19:22 UTC | 24 Sep 24 19:22 UTC |
	| delete  | -p multinode-624105                                                                     | multinode-624105     | jenkins | v1.34.0 | 24 Sep 24 19:22 UTC | 24 Sep 24 19:22 UTC |
	| start   | -p test-preload-184922                                                                  | test-preload-184922  | jenkins | v1.34.0 | 24 Sep 24 19:22 UTC | 24 Sep 24 19:24 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-184922 image pull                                                          | test-preload-184922  | jenkins | v1.34.0 | 24 Sep 24 19:24 UTC | 24 Sep 24 19:24 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-184922                                                                  | test-preload-184922  | jenkins | v1.34.0 | 24 Sep 24 19:24 UTC | 24 Sep 24 19:24 UTC |
	| start   | -p test-preload-184922                                                                  | test-preload-184922  | jenkins | v1.34.0 | 24 Sep 24 19:24 UTC | 24 Sep 24 19:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-184922 image list                                                          | test-preload-184922  | jenkins | v1.34.0 | 24 Sep 24 19:25 UTC | 24 Sep 24 19:25 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:24:16
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:24:16.575612   45113 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:24:16.575702   45113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:24:16.575710   45113 out.go:358] Setting ErrFile to fd 2...
	I0924 19:24:16.575714   45113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:24:16.575861   45113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:24:16.576329   45113 out.go:352] Setting JSON to false
	I0924 19:24:16.577180   45113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4008,"bootTime":1727201849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:24:16.577265   45113 start.go:139] virtualization: kvm guest
	I0924 19:24:16.579361   45113 out.go:177] * [test-preload-184922] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:24:16.580625   45113 notify.go:220] Checking for updates...
	I0924 19:24:16.580680   45113 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:24:16.582213   45113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:24:16.583510   45113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:24:16.584938   45113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:24:16.586208   45113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:24:16.587446   45113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:24:16.588902   45113 config.go:182] Loaded profile config "test-preload-184922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 19:24:16.589286   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:24:16.589340   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:24:16.603683   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0924 19:24:16.604204   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:24:16.604804   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:24:16.604837   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:24:16.605130   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:24:16.605301   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:16.607000   45113 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:24:16.608213   45113 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:24:16.608482   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:24:16.608516   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:24:16.622488   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0924 19:24:16.622900   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:24:16.623339   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:24:16.623363   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:24:16.623652   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:24:16.623811   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:16.657211   45113 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:24:16.658379   45113 start.go:297] selected driver: kvm2
	I0924 19:24:16.658396   45113 start.go:901] validating driver "kvm2" against &{Name:test-preload-184922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-184922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:24:16.658520   45113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:24:16.659640   45113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:24:16.659743   45113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:24:16.674429   45113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:24:16.675293   45113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:24:16.675377   45113 cni.go:84] Creating CNI manager for ""
	I0924 19:24:16.675430   45113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:24:16.675561   45113 start.go:340] cluster config:
	{Name:test-preload-184922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-184922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:24:16.675738   45113 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:24:16.677483   45113 out.go:177] * Starting "test-preload-184922" primary control-plane node in "test-preload-184922" cluster
	I0924 19:24:16.678542   45113 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 19:24:16.701329   45113 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0924 19:24:16.701348   45113 cache.go:56] Caching tarball of preloaded images
	I0924 19:24:16.701476   45113 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 19:24:16.703070   45113 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0924 19:24:16.704342   45113 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 19:24:16.729495   45113 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0924 19:24:25.334812   45113 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 19:24:25.334918   45113 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 19:24:26.171961   45113 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
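The lines above show the start flow fetching the v1.24.4 preload tarball, with an md5 checksum passed as a URL query parameter and verified once the download finishes. Below is a minimal Go sketch of that download-and-verify step; it is illustrative only (not minikube's actual preload/download code), the destination path is made up, and only the URL and checksum are taken from the log.

// sketch: fetch a preload tarball and verify the md5 checksum shown in the log
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	// hash the stream while writing it to disk, so the tarball is not re-read for verification
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the download.go log line above; destination is hypothetical
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"b2ee0ab83ed99f9e7ff71cb0cf27e8f9",
	)
	fmt.Println(err)
}

Hashing the stream as it is written avoids a second pass over a multi-hundred-megabyte file just to verify it.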
	I0924 19:24:26.172095   45113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/config.json ...
	I0924 19:24:26.172318   45113 start.go:360] acquireMachinesLock for test-preload-184922: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:24:26.172375   45113 start.go:364] duration metric: took 36.772µs to acquireMachinesLock for "test-preload-184922"
	I0924 19:24:26.172389   45113 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:24:26.172395   45113 fix.go:54] fixHost starting: 
	I0924 19:24:26.172685   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:24:26.172719   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:24:26.186989   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37961
	I0924 19:24:26.187425   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:24:26.187998   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:24:26.188025   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:24:26.188320   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:24:26.188496   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:26.188626   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetState
	I0924 19:24:26.190305   45113 fix.go:112] recreateIfNeeded on test-preload-184922: state=Stopped err=<nil>
	I0924 19:24:26.190327   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	W0924 19:24:26.190474   45113 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:24:26.192563   45113 out.go:177] * Restarting existing kvm2 VM for "test-preload-184922" ...
	I0924 19:24:26.193859   45113 main.go:141] libmachine: (test-preload-184922) Calling .Start
	I0924 19:24:26.194004   45113 main.go:141] libmachine: (test-preload-184922) Ensuring networks are active...
	I0924 19:24:26.194702   45113 main.go:141] libmachine: (test-preload-184922) Ensuring network default is active
	I0924 19:24:26.194982   45113 main.go:141] libmachine: (test-preload-184922) Ensuring network mk-test-preload-184922 is active
	I0924 19:24:26.195356   45113 main.go:141] libmachine: (test-preload-184922) Getting domain xml...
	I0924 19:24:26.196076   45113 main.go:141] libmachine: (test-preload-184922) Creating domain...
	I0924 19:24:27.373548   45113 main.go:141] libmachine: (test-preload-184922) Waiting to get IP...
	I0924 19:24:27.374371   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:27.374763   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:27.374809   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:27.374736   45180 retry.go:31] will retry after 292.644027ms: waiting for machine to come up
	I0924 19:24:27.669313   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:27.669741   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:27.669777   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:27.669697   45180 retry.go:31] will retry after 241.94186ms: waiting for machine to come up
	I0924 19:24:27.913282   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:27.913646   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:27.913684   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:27.913606   45180 retry.go:31] will retry after 407.833551ms: waiting for machine to come up
	I0924 19:24:28.323310   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:28.323755   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:28.323781   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:28.323701   45180 retry.go:31] will retry after 551.822296ms: waiting for machine to come up
	I0924 19:24:28.877429   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:28.877891   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:28.877919   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:28.877832   45180 retry.go:31] will retry after 757.433462ms: waiting for machine to come up
	I0924 19:24:29.636424   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:29.636802   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:29.636827   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:29.636765   45180 retry.go:31] will retry after 728.207066ms: waiting for machine to come up
	I0924 19:24:30.366117   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:30.366522   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:30.366554   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:30.366471   45180 retry.go:31] will retry after 1.117475191s: waiting for machine to come up
	I0924 19:24:31.485082   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:31.485585   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:31.485608   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:31.485516   45180 retry.go:31] will retry after 1.181703398s: waiting for machine to come up
	I0924 19:24:32.668756   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:32.669176   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:32.669261   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:32.669201   45180 retry.go:31] will retry after 1.76298049s: waiting for machine to come up
	I0924 19:24:34.434105   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:34.434517   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:34.434538   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:34.434498   45180 retry.go:31] will retry after 1.885232999s: waiting for machine to come up
	I0924 19:24:36.320858   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:36.321215   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:36.321242   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:36.321177   45180 retry.go:31] will retry after 2.778012748s: waiting for machine to come up
	I0924 19:24:39.102043   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:39.102443   45113 main.go:141] libmachine: (test-preload-184922) DBG | unable to find current IP address of domain test-preload-184922 in network mk-test-preload-184922
	I0924 19:24:39.102466   45113 main.go:141] libmachine: (test-preload-184922) DBG | I0924 19:24:39.102404   45180 retry.go:31] will retry after 3.458446058s: waiting for machine to come up
	I0924 19:24:42.562482   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.562905   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has current primary IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.562936   45113 main.go:141] libmachine: (test-preload-184922) Found IP for machine: 192.168.39.144
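The repeated "will retry after ..." lines above come from a loop that polls libvirt for the VM's DHCP lease with a growing, jittered delay until an IP appears. A rough Go sketch of that pattern follows; the function name, cap, and attempt count are assumptions, not minikube's retry.go API.

// sketch of the retry-with-increasing-backoff pattern visible in the log above
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() error, maxBackoff time.Duration, attempts int) error {
	backoff := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		// jittered, roughly doubling delay, capped at maxBackoff
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		if sleep > maxBackoff {
			sleep = maxBackoff
		}
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	tries := 0
	_ = waitFor(func() error {
		tries++
		if tries < 3 {
			return errors.New("no IP yet") // stands in for "unable to find current IP address"
		}
		return nil
	}, 5*time.Second, 10)
}

Doubling the delay keeps the log quiet while the guest boots, yet still picks up the address shortly after the DHCP lease appears.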
	I0924 19:24:42.562950   45113 main.go:141] libmachine: (test-preload-184922) Reserving static IP address...
	I0924 19:24:42.563282   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "test-preload-184922", mac: "52:54:00:79:63:3e", ip: "192.168.39.144"} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.563306   45113 main.go:141] libmachine: (test-preload-184922) Reserved static IP address: 192.168.39.144
	I0924 19:24:42.563323   45113 main.go:141] libmachine: (test-preload-184922) DBG | skip adding static IP to network mk-test-preload-184922 - found existing host DHCP lease matching {name: "test-preload-184922", mac: "52:54:00:79:63:3e", ip: "192.168.39.144"}
	I0924 19:24:42.563341   45113 main.go:141] libmachine: (test-preload-184922) DBG | Getting to WaitForSSH function...
	I0924 19:24:42.563355   45113 main.go:141] libmachine: (test-preload-184922) Waiting for SSH to be available...
	I0924 19:24:42.565261   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.565579   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.565613   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.565634   45113 main.go:141] libmachine: (test-preload-184922) DBG | Using SSH client type: external
	I0924 19:24:42.565696   45113 main.go:141] libmachine: (test-preload-184922) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa (-rw-------)
	I0924 19:24:42.565736   45113 main.go:141] libmachine: (test-preload-184922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:24:42.565754   45113 main.go:141] libmachine: (test-preload-184922) DBG | About to run SSH command:
	I0924 19:24:42.565764   45113 main.go:141] libmachine: (test-preload-184922) DBG | exit 0
	I0924 19:24:42.686320   45113 main.go:141] libmachine: (test-preload-184922) DBG | SSH cmd err, output: <nil>: 
	I0924 19:24:42.686649   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetConfigRaw
	I0924 19:24:42.687219   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetIP
	I0924 19:24:42.689483   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.689775   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.689799   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.689996   45113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/config.json ...
	I0924 19:24:42.690232   45113 machine.go:93] provisionDockerMachine start ...
	I0924 19:24:42.690250   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:42.690421   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:42.692804   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.693127   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.693148   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.693250   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:42.693394   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.693537   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.693631   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:42.693758   45113 main.go:141] libmachine: Using SSH client type: native
	I0924 19:24:42.693965   45113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0924 19:24:42.693978   45113 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:24:42.794417   45113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:24:42.794439   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetMachineName
	I0924 19:24:42.794667   45113 buildroot.go:166] provisioning hostname "test-preload-184922"
	I0924 19:24:42.794695   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetMachineName
	I0924 19:24:42.794991   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:42.797394   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.797798   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.797825   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.797956   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:42.798123   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.798259   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.798366   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:42.798497   45113 main.go:141] libmachine: Using SSH client type: native
	I0924 19:24:42.798672   45113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0924 19:24:42.798682   45113 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-184922 && echo "test-preload-184922" | sudo tee /etc/hostname
	I0924 19:24:42.910732   45113 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-184922
	
	I0924 19:24:42.910759   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:42.913649   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.914015   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:42.914043   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:42.914196   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:42.914425   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.914582   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:42.914733   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:42.914888   45113 main.go:141] libmachine: Using SSH client type: native
	I0924 19:24:42.915050   45113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0924 19:24:42.915067   45113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-184922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-184922/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-184922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:24:43.026582   45113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
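The SSH command above makes the 127.0.1.1 entry in /etc/hosts idempotent: if the new hostname is already mapped nothing changes, an existing 127.0.1.1 line is rewritten, and otherwise the entry is appended. The following is a small Go translation of that logic, a sketch only and not minikube code.

// sketch: ensure /etc/hosts maps 127.0.1.1 to the node's hostname, mirroring the shell above
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// already mapped? then nothing to do
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return nil
		}
	}
	// otherwise rewrite the existing 127.0.1.1 line, or append a new one
	entry := "127.0.1.1 " + hostname
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry
			return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, entry)
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// hostname taken from the log above; running this for real requires root
	fmt.Println(ensureHostsEntry("/etc/hosts", "test-preload-184922"))
}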
	I0924 19:24:43.026607   45113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:24:43.026633   45113 buildroot.go:174] setting up certificates
	I0924 19:24:43.026642   45113 provision.go:84] configureAuth start
	I0924 19:24:43.026650   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetMachineName
	I0924 19:24:43.026941   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetIP
	I0924 19:24:43.029418   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.029726   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.029755   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.029899   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.032077   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.032383   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.032406   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.032537   45113 provision.go:143] copyHostCerts
	I0924 19:24:43.032607   45113 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:24:43.032619   45113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:24:43.032697   45113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:24:43.032798   45113 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:24:43.032808   45113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:24:43.032851   45113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:24:43.032925   45113 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:24:43.032935   45113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:24:43.032970   45113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:24:43.033036   45113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.test-preload-184922 san=[127.0.0.1 192.168.39.144 localhost minikube test-preload-184922]
	I0924 19:24:43.285570   45113 provision.go:177] copyRemoteCerts
	I0924 19:24:43.285651   45113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:24:43.285677   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.288191   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.288484   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.288515   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.288684   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.288852   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.288991   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.289100   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:24:43.367918   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:24:43.389660   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 19:24:43.411526   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:24:43.433102   45113 provision.go:87] duration metric: took 406.449843ms to configureAuth
	I0924 19:24:43.433127   45113 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:24:43.433302   45113 config.go:182] Loaded profile config "test-preload-184922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 19:24:43.433380   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.435801   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.436078   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.436109   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.436235   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.436384   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.436503   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.436660   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.436802   45113 main.go:141] libmachine: Using SSH client type: native
	I0924 19:24:43.437014   45113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0924 19:24:43.437033   45113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:24:43.640199   45113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:24:43.640225   45113 machine.go:96] duration metric: took 949.979521ms to provisionDockerMachine
	I0924 19:24:43.640239   45113 start.go:293] postStartSetup for "test-preload-184922" (driver="kvm2")
	I0924 19:24:43.640252   45113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:24:43.640272   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:43.640624   45113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:24:43.640655   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.643058   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.643370   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.643396   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.643570   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.643747   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.643883   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.644003   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:24:43.724448   45113 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:24:43.728501   45113 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:24:43.728521   45113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:24:43.728588   45113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:24:43.728664   45113 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:24:43.728769   45113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:24:43.737651   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:24:43.759120   45113 start.go:296] duration metric: took 118.868481ms for postStartSetup
	I0924 19:24:43.759176   45113 fix.go:56] duration metric: took 17.586776682s for fixHost
	I0924 19:24:43.759195   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.761584   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.761973   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.761998   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.762166   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.762324   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.762490   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.762594   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.762730   45113 main.go:141] libmachine: Using SSH client type: native
	I0924 19:24:43.762923   45113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0924 19:24:43.762935   45113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:24:43.863077   45113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727205883.837779819
	
	I0924 19:24:43.863108   45113 fix.go:216] guest clock: 1727205883.837779819
	I0924 19:24:43.863115   45113 fix.go:229] Guest: 2024-09-24 19:24:43.837779819 +0000 UTC Remote: 2024-09-24 19:24:43.759181232 +0000 UTC m=+27.215958918 (delta=78.598587ms)
	I0924 19:24:43.863142   45113 fix.go:200] guest clock delta is within tolerance: 78.598587ms
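The guest clock check above runs `date +%s.%N` inside the VM and compares the result with the host time, accepting the delta if it is within tolerance. Below is a short Go sketch of parsing that output and computing the delta; the function name and the way the host timestamp is passed in are assumptions, not minikube's fix.go code.

// sketch: parse `date +%s.%N` output from the guest and compute the clock delta
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, hostTime time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	d := guest.Sub(hostTime)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// guest value taken from the log above; in a real run hostTime would be
	// captured at the moment the SSH command returns, not here
	d, err := guestClockDelta("1727205883.837779819", time.Now())
	fmt.Println(d, err)
}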
	I0924 19:24:43.863148   45113 start.go:83] releasing machines lock for "test-preload-184922", held for 17.690763377s
	I0924 19:24:43.863173   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:43.863432   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetIP
	I0924 19:24:43.865813   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.866150   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.866180   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.866328   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:43.866766   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:43.866894   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:24:43.866973   45113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:24:43.867028   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.867046   45113 ssh_runner.go:195] Run: cat /version.json
	I0924 19:24:43.867065   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:24:43.869706   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.869794   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.870040   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.870061   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.870085   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:43.870096   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:43.870277   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.870356   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:24:43.870444   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.870582   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:24:43.870588   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.870838   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:24:43.870822   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:24:43.870958   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:24:43.968003   45113 ssh_runner.go:195] Run: systemctl --version
	I0924 19:24:43.973661   45113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:24:44.125156   45113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:24:44.131433   45113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:24:44.131499   45113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:24:44.147196   45113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:24:44.147215   45113 start.go:495] detecting cgroup driver to use...
	I0924 19:24:44.147263   45113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:24:44.167240   45113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:24:44.182128   45113 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:24:44.182180   45113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:24:44.196863   45113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:24:44.211365   45113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:24:44.324770   45113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:24:44.492957   45113 docker.go:233] disabling docker service ...
	I0924 19:24:44.493036   45113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:24:44.506300   45113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:24:44.518180   45113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:24:44.629685   45113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:24:44.739847   45113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:24:44.753316   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:24:44.769828   45113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0924 19:24:44.769892   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.779313   45113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:24:44.779378   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.788843   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.798001   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.807548   45113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:24:44.817162   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.826333   45113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.841167   45113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:24:44.850315   45113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:24:44.858499   45113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:24:44.858542   45113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:24:44.870610   45113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
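The failed `sysctl net.bridge.bridge-nf-call-iptables` above is expected on a fresh guest: that key only exists under /proc/sys once the br_netfilter kernel module is loaded, which is why the very next step is `modprobe br_netfilter` before IP forwarding is enabled. A minimal Go sketch of the same check-then-load pattern (hypothetical helper, not the crio.go/ssh_runner.go code path logged above):

    // ensureBridgeNetfilter loads br_netfilter when the bridge sysctl key is missing.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err == nil {
    		return nil // module already loaded, sysctl key present
    	}
    	// The key is absent until the br_netfilter module is loaded.
    	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }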
	I0924 19:24:44.879382   45113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:24:44.994378   45113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:24:45.077750   45113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:24:45.077829   45113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:24:45.082579   45113 start.go:563] Will wait 60s for crictl version
	I0924 19:24:45.082644   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:45.086065   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:24:45.120217   45113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:24:45.120310   45113 ssh_runner.go:195] Run: crio --version
	I0924 19:24:45.145443   45113 ssh_runner.go:195] Run: crio --version
	I0924 19:24:45.173353   45113 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0924 19:24:45.174777   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetIP
	I0924 19:24:45.177215   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:45.177510   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:24:45.177539   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:24:45.177700   45113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:24:45.181531   45113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:24:45.193109   45113 kubeadm.go:883] updating cluster {Name:test-preload-184922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-184922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:24:45.193229   45113 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 19:24:45.193287   45113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:24:45.224560   45113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0924 19:24:45.224626   45113 ssh_runner.go:195] Run: which lz4
	I0924 19:24:45.228338   45113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:24:45.232326   45113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:24:45.232357   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0924 19:24:46.539062   45113 crio.go:462] duration metric: took 1.310749802s to copy over tarball
	I0924 19:24:46.539137   45113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:24:48.809020   45113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269851807s)
	I0924 19:24:48.809048   45113 crio.go:469] duration metric: took 2.269959412s to extract the tarball
	I0924 19:24:48.809056   45113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:24:48.848726   45113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:24:48.886344   45113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0924 19:24:48.886366   45113 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:24:48.886417   45113 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:24:48.886465   45113 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:48.886501   45113 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 19:24:48.886523   45113 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:48.886543   45113 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:48.886503   45113 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:48.886547   45113 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:48.886478   45113 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:48.887740   45113 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:48.887774   45113 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:48.887805   45113 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:48.887814   45113 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:24:48.887743   45113 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:48.887744   45113 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 19:24:48.887744   45113 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:48.887743   45113 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.024852   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:49.031265   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:49.032176   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:49.032877   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.042897   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0924 19:24:49.047133   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:49.058211   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:49.088533   45113 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0924 19:24:49.088582   45113 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:49.088627   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.130247   45113 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0924 19:24:49.130296   45113 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:49.130347   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.134402   45113 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0924 19:24:49.134438   45113 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:49.134475   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.174804   45113 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0924 19:24:49.174859   45113 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0924 19:24:49.174889   45113 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0924 19:24:49.174931   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.174866   45113 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.174991   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.179253   45113 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0924 19:24:49.179285   45113 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:49.179324   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.179328   45113 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0924 19:24:49.179353   45113 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:49.179365   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:49.179391   45113 ssh_runner.go:195] Run: which crictl
	I0924 19:24:49.179449   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:49.179483   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:49.181460   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.182242   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 19:24:49.192259   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:49.287399   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:49.287474   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:49.287502   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:49.287536   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.287478   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:49.304515   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:49.304545   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 19:24:49.429058   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 19:24:49.429092   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:49.429140   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 19:24:49.429188   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 19:24:49.429226   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 19:24:49.437426   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 19:24:49.443116   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 19:24:49.574635   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0924 19:24:49.574655   45113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 19:24:49.574690   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 19:24:49.574723   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 19:24:49.574757   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0924 19:24:49.574764   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0924 19:24:49.574796   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0924 19:24:49.574857   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 19:24:49.574869   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0924 19:24:49.574871   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0924 19:24:49.574912   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0924 19:24:49.574933   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 19:24:49.574975   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0924 19:24:49.612181   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0924 19:24:49.612206   45113 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 19:24:49.612251   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0924 19:24:49.612258   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 19:24:49.612272   45113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0924 19:24:49.612303   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0924 19:24:49.612343   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0924 19:24:49.612373   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0924 19:24:49.612375   45113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 19:24:49.612418   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0924 19:24:49.937242   45113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:24:52.661500   45113 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.04921051s)
	I0924 19:24:52.661535   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0924 19:24:52.661566   45113 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0924 19:24:52.661574   45113 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.049174601s)
	I0924 19:24:52.661605   45113 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0924 19:24:52.661616   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0924 19:24:52.661660   45113 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724385874s)
	I0924 19:24:54.815423   45113 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.153786743s)
	I0924 19:24:54.815456   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0924 19:24:54.815484   45113 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0924 19:24:54.815534   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0924 19:24:55.154654   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0924 19:24:55.154704   45113 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 19:24:55.154761   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 19:24:55.593992   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0924 19:24:55.594047   45113 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0924 19:24:55.594128   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0924 19:24:55.730869   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0924 19:24:55.730924   45113 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 19:24:55.730975   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 19:24:56.372300   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0924 19:24:56.372337   45113 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 19:24:56.372384   45113 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 19:24:57.213309   45113 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0924 19:24:57.213353   45113 cache_images.go:123] Successfully loaded all cached images
	I0924 19:24:57.213360   45113 cache_images.go:92] duration metric: took 8.326982434s to LoadCachedImages
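The preceding block is the cache-load fallback: each required image is first inspected with `sudo podman image inspect --format {{.Id}}`, and anything missing is removed via crictl and reloaded from the cached tarballs under /var/lib/minikube/images with `sudo podman load -i`. A simplified Go sketch of that inspect-then-load pattern (illustrative only; the real flow also copies tarballs over SSH first, as logged above):

    // ensureImage loads a cached image tarball into the runtime (via podman)
    // when the image is not already present.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureImage(image, tarball string) error {
    	// `podman image inspect` exits non-zero when the image is absent.
    	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
    		return nil
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }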
	I0924 19:24:57.213375   45113 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.24.4 crio true true} ...
	I0924 19:24:57.213499   45113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-184922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-184922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:24:57.213583   45113 ssh_runner.go:195] Run: crio config
	I0924 19:24:57.264405   45113 cni.go:84] Creating CNI manager for ""
	I0924 19:24:57.264422   45113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:24:57.264431   45113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:24:57.264448   45113 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-184922 NodeName:test-preload-184922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:24:57.264566   45113 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-184922"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:24:57.264632   45113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0924 19:24:57.273740   45113 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:24:57.273804   45113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:24:57.283075   45113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0924 19:24:57.298764   45113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:24:57.314001   45113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0924 19:24:57.329409   45113 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0924 19:24:57.332833   45113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:24:57.343921   45113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:24:57.455313   45113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:24:57.470991   45113 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922 for IP: 192.168.39.144
	I0924 19:24:57.471014   45113 certs.go:194] generating shared ca certs ...
	I0924 19:24:57.471034   45113 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:24:57.471181   45113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:24:57.471232   45113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:24:57.471245   45113 certs.go:256] generating profile certs ...
	I0924 19:24:57.471322   45113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/client.key
	I0924 19:24:57.471396   45113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/apiserver.key.e0eb44f0
	I0924 19:24:57.471447   45113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/proxy-client.key
	I0924 19:24:57.471577   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:24:57.471615   45113 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:24:57.471624   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:24:57.471658   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:24:57.471694   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:24:57.471725   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:24:57.471778   45113 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:24:57.472451   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:24:57.510562   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:24:57.538853   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:24:57.575409   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:24:57.608440   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:24:57.631353   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:24:57.659786   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:24:57.680692   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:24:57.702695   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:24:57.725517   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:24:57.748055   45113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:24:57.769597   45113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:24:57.784610   45113 ssh_runner.go:195] Run: openssl version
	I0924 19:24:57.789879   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:24:57.799667   45113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:24:57.803668   45113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:24:57.803709   45113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:24:57.808849   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:24:57.818682   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:24:57.828355   45113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:24:57.832616   45113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:24:57.832675   45113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:24:57.837898   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:24:57.847325   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:24:57.856874   45113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:24:57.860973   45113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:24:57.861011   45113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:24:57.865993   45113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:24:57.875057   45113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:24:57.878944   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:24:57.884360   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:24:57.889627   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:24:57.895182   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:24:57.900468   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:24:57.905861   45113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
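Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is the cue to regenerate it rather than reuse it. The equivalent check in Go, as an illustrative sketch (the path is one of the certs checked above; certValidFor is a hypothetical helper):

    // certValidFor reports whether the PEM certificate at path is still valid
    // for at least the given duration (the Go analogue of `openssl x509 -checkend`).
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }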
	I0924 19:24:57.911435   45113 kubeadm.go:392] StartCluster: {Name:test-preload-184922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-184922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:24:57.911519   45113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:24:57.911555   45113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:24:57.947688   45113 cri.go:89] found id: ""
	I0924 19:24:57.947770   45113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:24:57.957146   45113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:24:57.957165   45113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:24:57.957209   45113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:24:57.965873   45113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:24:57.966299   45113 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-184922" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:24:57.966413   45113 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-184922" cluster setting kubeconfig missing "test-preload-184922" context setting]
	I0924 19:24:57.966705   45113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:24:57.967325   45113 kapi.go:59] client config for test-preload-184922: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 19:24:57.967906   45113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:24:57.976027   45113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.144
	I0924 19:24:57.976056   45113 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:24:57.976067   45113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:24:57.976113   45113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:24:58.008492   45113 cri.go:89] found id: ""
	I0924 19:24:58.008563   45113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:24:58.023475   45113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:24:58.032299   45113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:24:58.032317   45113 kubeadm.go:157] found existing configuration files:
	
	I0924 19:24:58.032365   45113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:24:58.040488   45113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:24:58.040536   45113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:24:58.048714   45113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:24:58.056476   45113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:24:58.056528   45113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:24:58.064969   45113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:24:58.072860   45113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:24:58.072901   45113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:24:58.081116   45113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:24:58.089122   45113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:24:58.089179   45113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
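The grep/rm sequence above is stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything else (including files that simply do not exist yet) is removed so the following `kubeadm init phase kubeconfig` regenerates it. A rough Go sketch of that loop (hypothetical helper, not minikube's kubeadm.go):

    // cleanupStaleKubeconfigs removes config files that do not reference the
    // expected control-plane endpoint, forcing kubeadm to regenerate them.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func cleanupStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(p) // missing, unreadable, or pointing elsewhere
    			fmt.Println("removed", p)
    		}
    	}
    }

    func main() {
    	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }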
	I0924 19:24:58.097734   45113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:24:58.106164   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:24:58.184910   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:24:58.977042   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:24:59.213119   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:24:59.266494   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:24:59.343897   45113 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:24:59.343985   45113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:24:59.844727   45113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:25:00.344338   45113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:25:00.361709   45113 api_server.go:72] duration metric: took 1.01781268s to wait for apiserver process to appear ...
	I0924 19:25:00.361733   45113 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:25:00.361750   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:00.362236   45113 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0924 19:25:00.862044   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:03.667519   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:25:03.667548   45113 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:25:03.667566   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:03.679717   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:25:03.679751   45113 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:25:03.862018   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:03.868616   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:25:03.868643   45113 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:25:04.362161   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:04.367446   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:25:04.367480   45113 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:25:04.862011   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:04.866852   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0924 19:25:04.872698   45113 api_server.go:141] control plane version: v1.24.4
	I0924 19:25:04.872725   45113 api_server.go:131] duration metric: took 4.510985627s to wait for apiserver health ...
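The healthz progression above is the normal restart sequence: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and bootstrap priority-class post-start hooks are still pending, then 200 once bootstrap completes. A rough sketch of the same poll-until-healthy loop (endpoint copied from the log; this is an illustration, not minikube's api_server.go, and it skips TLS verification purely to stay self-contained):

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the timestamps above
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.144:8443/healthz", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver healthy")
    }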
	I0924 19:25:04.872735   45113 cni.go:84] Creating CNI manager for ""
	I0924 19:25:04.872743   45113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:25:04.874705   45113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:25:04.876104   45113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:25:04.886234   45113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:25:04.902435   45113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:25:04.902500   45113 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 19:25:04.902515   45113 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 19:25:04.912943   45113 system_pods.go:59] 8 kube-system pods found
	I0924 19:25:04.912986   45113 system_pods.go:61] "coredns-6d4b75cb6d-lcbpv" [ef5c72b0-8ec8-4d1b-ad4d-5a14259b9c6a] Running
	I0924 19:25:04.912995   45113 system_pods.go:61] "coredns-6d4b75cb6d-xf2r4" [7d38f546-04ea-4b98-87bc-0d9c0b7da9e3] Running
	I0924 19:25:04.913000   45113 system_pods.go:61] "etcd-test-preload-184922" [6b1b65ec-a565-4338-abaf-8b49a8b5ce1e] Running
	I0924 19:25:04.913006   45113 system_pods.go:61] "kube-apiserver-test-preload-184922" [fe554d98-9035-40dd-8b22-457ff0602bc2] Running
	I0924 19:25:04.913021   45113 system_pods.go:61] "kube-controller-manager-test-preload-184922" [69960295-63f0-4bb1-8c97-1d66271bed4e] Running
	I0924 19:25:04.913030   45113 system_pods.go:61] "kube-proxy-ns6gn" [92d91b23-c0ed-4898-8d11-5e136db42882] Running
	I0924 19:25:04.913038   45113 system_pods.go:61] "kube-scheduler-test-preload-184922" [83e172b4-b37e-4925-b857-3c934f68f910] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:25:04.913046   45113 system_pods.go:61] "storage-provisioner" [c5b4beb3-de06-45cd-b3a7-8950b5b4da65] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:25:04.913058   45113 system_pods.go:74] duration metric: took 10.602914ms to wait for pod list to return data ...
	I0924 19:25:04.913072   45113 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:25:04.917240   45113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:25:04.917263   45113 node_conditions.go:123] node cpu capacity is 2
	I0924 19:25:04.917271   45113 node_conditions.go:105] duration metric: took 4.194556ms to run NodePressure ...
	I0924 19:25:04.917285   45113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:25:05.092380   45113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:25:05.096056   45113 kubeadm.go:739] kubelet initialised
	I0924 19:25:05.096076   45113 kubeadm.go:740] duration metric: took 3.668305ms waiting for restarted kubelet to initialise ...
	I0924 19:25:05.096083   45113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:25:05.101758   45113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-lcbpv" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:05.106309   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "coredns-6d4b75cb6d-lcbpv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.106334   45113 pod_ready.go:82] duration metric: took 4.545392ms for pod "coredns-6d4b75cb6d-lcbpv" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:05.106346   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "coredns-6d4b75cb6d-lcbpv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.106355   45113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:05.110723   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.110746   45113 pod_ready.go:82] duration metric: took 4.376053ms for pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:05.110756   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.110763   45113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:05.114653   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "etcd-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.114673   45113 pod_ready.go:82] duration metric: took 3.900401ms for pod "etcd-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:05.114682   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "etcd-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.114689   45113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:05.306307   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "kube-apiserver-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.306337   45113 pod_ready.go:82] duration metric: took 191.636648ms for pod "kube-apiserver-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:05.306349   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "kube-apiserver-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.306357   45113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:05.706014   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.706046   45113 pod_ready.go:82] duration metric: took 399.676865ms for pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:05.706059   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:05.706068   45113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ns6gn" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:06.105862   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "kube-proxy-ns6gn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:06.105891   45113 pod_ready.go:82] duration metric: took 399.810714ms for pod "kube-proxy-ns6gn" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:06.105908   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "kube-proxy-ns6gn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:06.105916   45113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:06.506637   45113 pod_ready.go:98] node "test-preload-184922" hosting pod "kube-scheduler-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:06.506666   45113 pod_ready.go:82] duration metric: took 400.741453ms for pod "kube-scheduler-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	E0924 19:25:06.506678   45113 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-184922" hosting pod "kube-scheduler-test-preload-184922" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:06.506687   45113 pod_ready.go:39] duration metric: took 1.410596354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
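
The pod_ready.go waits above poll each system-critical pod's Ready condition and deliberately skip pods whose hosting node is not yet "Ready". A rough standalone equivalent of the per-pod check, using client-go, is sketched below; the kubeconfig path and pod name are placeholders, and this is not the helper minikube itself uses.

// podready.go - minimal client-go sketch: report whether a kube-system pod has
// the Ready condition set to True. Illustrative only; the kubeconfig path and
// pod name are placeholders, not values from this test run.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-example", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
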
	I0924 19:25:06.506708   45113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:25:06.517628   45113 ops.go:34] apiserver oom_adj: -16
	I0924 19:25:06.517656   45113 kubeadm.go:597] duration metric: took 8.560483696s to restartPrimaryControlPlane
	I0924 19:25:06.517667   45113 kubeadm.go:394] duration metric: took 8.606237215s to StartCluster
	I0924 19:25:06.517685   45113 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:25:06.517781   45113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:25:06.518367   45113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:25:06.518593   45113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:25:06.518671   45113 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:25:06.518767   45113 addons.go:69] Setting storage-provisioner=true in profile "test-preload-184922"
	I0924 19:25:06.518785   45113 addons.go:234] Setting addon storage-provisioner=true in "test-preload-184922"
	I0924 19:25:06.518783   45113 addons.go:69] Setting default-storageclass=true in profile "test-preload-184922"
	W0924 19:25:06.518792   45113 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:25:06.518803   45113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-184922"
	I0924 19:25:06.518848   45113 host.go:66] Checking if "test-preload-184922" exists ...
	I0924 19:25:06.518858   45113 config.go:182] Loaded profile config "test-preload-184922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 19:25:06.519170   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:25:06.519212   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:25:06.519255   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:25:06.519297   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:25:06.520252   45113 out.go:177] * Verifying Kubernetes components...
	I0924 19:25:06.521655   45113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:25:06.534060   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0924 19:25:06.534182   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0924 19:25:06.534552   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:25:06.534648   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:25:06.535049   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:25:06.535069   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:25:06.535189   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:25:06.535212   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:25:06.535380   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:25:06.535531   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetState
	I0924 19:25:06.535532   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:25:06.536103   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:25:06.536147   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:25:06.537723   45113 kapi.go:59] client config for test-preload-184922: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/test-preload-184922/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 19:25:06.537965   45113 addons.go:234] Setting addon default-storageclass=true in "test-preload-184922"
	W0924 19:25:06.537982   45113 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:25:06.538003   45113 host.go:66] Checking if "test-preload-184922" exists ...
	I0924 19:25:06.538240   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:25:06.538274   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:25:06.552016   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34257
	I0924 19:25:06.552467   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:25:06.552943   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:25:06.552968   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:25:06.553018   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0924 19:25:06.553324   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:25:06.553440   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:25:06.553936   45113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:25:06.553977   45113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:25:06.554591   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:25:06.554615   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:25:06.554974   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:25:06.555183   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetState
	I0924 19:25:06.556897   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:25:06.558926   45113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:25:06.560513   45113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:25:06.560532   45113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:25:06.560550   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:25:06.563177   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:25:06.563574   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:25:06.563616   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:25:06.563724   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:25:06.563887   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:25:06.564018   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:25:06.564144   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:25:06.585967   45113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
	I0924 19:25:06.586360   45113 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:25:06.586869   45113 main.go:141] libmachine: Using API Version  1
	I0924 19:25:06.586892   45113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:25:06.587218   45113 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:25:06.587418   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetState
	I0924 19:25:06.589164   45113 main.go:141] libmachine: (test-preload-184922) Calling .DriverName
	I0924 19:25:06.589367   45113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:25:06.589383   45113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:25:06.589401   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHHostname
	I0924 19:25:06.592016   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:25:06.592502   45113 main.go:141] libmachine: (test-preload-184922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:63:3e", ip: ""} in network mk-test-preload-184922: {Iface:virbr1 ExpiryTime:2024-09-24 20:24:36 +0000 UTC Type:0 Mac:52:54:00:79:63:3e Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-184922 Clientid:01:52:54:00:79:63:3e}
	I0924 19:25:06.592529   45113 main.go:141] libmachine: (test-preload-184922) DBG | domain test-preload-184922 has defined IP address 192.168.39.144 and MAC address 52:54:00:79:63:3e in network mk-test-preload-184922
	I0924 19:25:06.592676   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHPort
	I0924 19:25:06.592858   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHKeyPath
	I0924 19:25:06.593006   45113 main.go:141] libmachine: (test-preload-184922) Calling .GetSSHUsername
	I0924 19:25:06.593155   45113 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/test-preload-184922/id_rsa Username:docker}
	I0924 19:25:06.680852   45113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:25:06.697889   45113 node_ready.go:35] waiting up to 6m0s for node "test-preload-184922" to be "Ready" ...
	I0924 19:25:06.772145   45113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:25:06.792085   45113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:25:07.622980   45113 main.go:141] libmachine: Making call to close driver server
	I0924 19:25:07.623002   45113 main.go:141] libmachine: (test-preload-184922) Calling .Close
	I0924 19:25:07.623107   45113 main.go:141] libmachine: Making call to close driver server
	I0924 19:25:07.623132   45113 main.go:141] libmachine: (test-preload-184922) Calling .Close
	I0924 19:25:07.623279   45113 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:25:07.623295   45113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:25:07.623301   45113 main.go:141] libmachine: (test-preload-184922) DBG | Closing plugin on server side
	I0924 19:25:07.623304   45113 main.go:141] libmachine: Making call to close driver server
	I0924 19:25:07.623312   45113 main.go:141] libmachine: (test-preload-184922) Calling .Close
	I0924 19:25:07.623375   45113 main.go:141] libmachine: (test-preload-184922) DBG | Closing plugin on server side
	I0924 19:25:07.623383   45113 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:25:07.623390   45113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:25:07.623395   45113 main.go:141] libmachine: Making call to close driver server
	I0924 19:25:07.623401   45113 main.go:141] libmachine: (test-preload-184922) Calling .Close
	I0924 19:25:07.623595   45113 main.go:141] libmachine: (test-preload-184922) DBG | Closing plugin on server side
	I0924 19:25:07.623647   45113 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:25:07.623677   45113 main.go:141] libmachine: (test-preload-184922) DBG | Closing plugin on server side
	I0924 19:25:07.623684   45113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:25:07.623655   45113 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:25:07.623696   45113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:25:07.630086   45113 main.go:141] libmachine: Making call to close driver server
	I0924 19:25:07.630101   45113 main.go:141] libmachine: (test-preload-184922) Calling .Close
	I0924 19:25:07.630331   45113 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:25:07.630350   45113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:25:07.630363   45113 main.go:141] libmachine: (test-preload-184922) DBG | Closing plugin on server side
	I0924 19:25:07.632265   45113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 19:25:07.633608   45113 addons.go:510] duration metric: took 1.114943999s for enable addons: enabled=[storage-provisioner default-storageclass]
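
The addon enablement above reduces to copying each manifest onto the node and running the bundled kubectl against the node-local kubeconfig (the two "kubectl apply -f" invocations in the log). A hedged sketch of that last step is below; it assumes it runs on the minikube node with sudo rights, and the binary and manifest paths are copied from the logged commands rather than discovered.

// applyaddon.go - minimal sketch: apply an addon manifest the way the logged
// ssh_runner command does, i.e. the node-local kubectl with the node-local
// kubeconfig. Illustrative only; assumes execution on the node itself.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths mirror the v1.24.4 commands logged above.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.4/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "apply failed: %v\n", err)
		os.Exit(1)
	}
}
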
	I0924 19:25:08.701435   45113 node_ready.go:53] node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:11.201520   45113 node_ready.go:53] node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:13.202017   45113 node_ready.go:53] node "test-preload-184922" has status "Ready":"False"
	I0924 19:25:14.201722   45113 node_ready.go:49] node "test-preload-184922" has status "Ready":"True"
	I0924 19:25:14.201747   45113 node_ready.go:38] duration metric: took 7.503817174s for node "test-preload-184922" to be "Ready" ...
	I0924 19:25:14.201757   45113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:25:14.207399   45113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:14.212109   45113 pod_ready.go:93] pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:14.212132   45113 pod_ready.go:82] duration metric: took 4.704667ms for pod "coredns-6d4b75cb6d-xf2r4" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:14.212140   45113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.720380   45113 pod_ready.go:93] pod "etcd-test-preload-184922" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:15.720410   45113 pod_ready.go:82] duration metric: took 1.50825904s for pod "etcd-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.720420   45113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.725857   45113 pod_ready.go:93] pod "kube-apiserver-test-preload-184922" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:15.725878   45113 pod_ready.go:82] duration metric: took 5.452465ms for pod "kube-apiserver-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.725886   45113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.730690   45113 pod_ready.go:93] pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:15.730706   45113 pod_ready.go:82] duration metric: took 4.813475ms for pod "kube-controller-manager-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.730717   45113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ns6gn" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.802219   45113 pod_ready.go:93] pod "kube-proxy-ns6gn" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:15.802244   45113 pod_ready.go:82] duration metric: took 71.520713ms for pod "kube-proxy-ns6gn" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:15.802253   45113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:16.203453   45113 pod_ready.go:93] pod "kube-scheduler-test-preload-184922" in "kube-system" namespace has status "Ready":"True"
	I0924 19:25:16.203481   45113 pod_ready.go:82] duration metric: took 401.220936ms for pod "kube-scheduler-test-preload-184922" in "kube-system" namespace to be "Ready" ...
	I0924 19:25:16.203492   45113 pod_ready.go:39] duration metric: took 2.001725612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:25:16.203511   45113 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:25:16.203567   45113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:25:16.217217   45113 api_server.go:72] duration metric: took 9.698591676s to wait for apiserver process to appear ...
	I0924 19:25:16.217243   45113 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:25:16.217264   45113 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0924 19:25:16.222157   45113 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0924 19:25:16.223189   45113 api_server.go:141] control plane version: v1.24.4
	I0924 19:25:16.223209   45113 api_server.go:131] duration metric: took 5.958688ms to wait for apiserver health ...
	I0924 19:25:16.223216   45113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:25:16.403720   45113 system_pods.go:59] 7 kube-system pods found
	I0924 19:25:16.403752   45113 system_pods.go:61] "coredns-6d4b75cb6d-xf2r4" [7d38f546-04ea-4b98-87bc-0d9c0b7da9e3] Running
	I0924 19:25:16.403757   45113 system_pods.go:61] "etcd-test-preload-184922" [6b1b65ec-a565-4338-abaf-8b49a8b5ce1e] Running
	I0924 19:25:16.403761   45113 system_pods.go:61] "kube-apiserver-test-preload-184922" [fe554d98-9035-40dd-8b22-457ff0602bc2] Running
	I0924 19:25:16.403764   45113 system_pods.go:61] "kube-controller-manager-test-preload-184922" [69960295-63f0-4bb1-8c97-1d66271bed4e] Running
	I0924 19:25:16.403768   45113 system_pods.go:61] "kube-proxy-ns6gn" [92d91b23-c0ed-4898-8d11-5e136db42882] Running
	I0924 19:25:16.403771   45113 system_pods.go:61] "kube-scheduler-test-preload-184922" [83e172b4-b37e-4925-b857-3c934f68f910] Running
	I0924 19:25:16.403773   45113 system_pods.go:61] "storage-provisioner" [c5b4beb3-de06-45cd-b3a7-8950b5b4da65] Running
	I0924 19:25:16.403779   45113 system_pods.go:74] duration metric: took 180.55778ms to wait for pod list to return data ...
	I0924 19:25:16.403785   45113 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:25:16.600690   45113 default_sa.go:45] found service account: "default"
	I0924 19:25:16.600715   45113 default_sa.go:55] duration metric: took 196.923842ms for default service account to be created ...
	I0924 19:25:16.600723   45113 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:25:16.804078   45113 system_pods.go:86] 7 kube-system pods found
	I0924 19:25:16.804105   45113 system_pods.go:89] "coredns-6d4b75cb6d-xf2r4" [7d38f546-04ea-4b98-87bc-0d9c0b7da9e3] Running
	I0924 19:25:16.804110   45113 system_pods.go:89] "etcd-test-preload-184922" [6b1b65ec-a565-4338-abaf-8b49a8b5ce1e] Running
	I0924 19:25:16.804114   45113 system_pods.go:89] "kube-apiserver-test-preload-184922" [fe554d98-9035-40dd-8b22-457ff0602bc2] Running
	I0924 19:25:16.804118   45113 system_pods.go:89] "kube-controller-manager-test-preload-184922" [69960295-63f0-4bb1-8c97-1d66271bed4e] Running
	I0924 19:25:16.804121   45113 system_pods.go:89] "kube-proxy-ns6gn" [92d91b23-c0ed-4898-8d11-5e136db42882] Running
	I0924 19:25:16.804124   45113 system_pods.go:89] "kube-scheduler-test-preload-184922" [83e172b4-b37e-4925-b857-3c934f68f910] Running
	I0924 19:25:16.804133   45113 system_pods.go:89] "storage-provisioner" [c5b4beb3-de06-45cd-b3a7-8950b5b4da65] Running
	I0924 19:25:16.804139   45113 system_pods.go:126] duration metric: took 203.411383ms to wait for k8s-apps to be running ...
	I0924 19:25:16.804145   45113 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:25:16.804205   45113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:25:16.818029   45113 system_svc.go:56] duration metric: took 13.873525ms WaitForService to wait for kubelet
	I0924 19:25:16.818059   45113 kubeadm.go:582] duration metric: took 10.299439679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:25:16.818082   45113 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:25:17.001428   45113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:25:17.001453   45113 node_conditions.go:123] node cpu capacity is 2
	I0924 19:25:17.001462   45113 node_conditions.go:105] duration metric: took 183.375633ms to run NodePressure ...
	I0924 19:25:17.001473   45113 start.go:241] waiting for startup goroutines ...
	I0924 19:25:17.001480   45113 start.go:246] waiting for cluster config update ...
	I0924 19:25:17.001489   45113 start.go:255] writing updated cluster config ...
	I0924 19:25:17.001740   45113 ssh_runner.go:195] Run: rm -f paused
	I0924 19:25:17.047709   45113 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0924 19:25:17.049729   45113 out.go:201] 
	W0924 19:25:17.051246   45113 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0924 19:25:17.052605   45113 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0924 19:25:17.053893   45113 out.go:177] * Done! kubectl is now configured to use "test-preload-184922" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.900162628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205917900143751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5f8feab-399e-4496-8c22-f897a1a6da3d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.900747150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff701e60-cc01-4bd4-ade1-b0fd1b772497 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.900803999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff701e60-cc01-4bd4-ade1-b0fd1b772497 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.900952391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:549307619edd71a007baeec987bf0ed514ea58bb9d586b7fd8791194efa0686e,PodSandboxId:ed1f30603c26bcabd4c9a32638c5cc4758700f4262e6886e77ed61c16de1d150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727205912393713396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xf2r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d38f546-04ea-4b98-87bc-0d9c0b7da9e3,},Annotations:map[string]string{io.kubernetes.container.hash: b280598b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7be66f0da6cfecf99bac40214d050cbd6e54d42691ac753afe908a64f8d99c6,PodSandboxId:921b3ba339bb46633f201df661bc0eb2f1f98f3a4bf70f58f0edf7fed5680897,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727205905359728763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ns6gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 92d91b23-c0ed-4898-8d11-5e136db42882,},Annotations:map[string]string{io.kubernetes.container.hash: 66015662,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5aaf7b4720016559a0ecb5260ca8c9e764036db6e435a494aed7ef4e4f2537b,PodSandboxId:fefc924997e3cc33c9e39d583366bef4f7253a22ec0ed3023e3315dcefa5830a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205905341927790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
b4beb3-de06-45cd-b3a7-8950b5b4da65,},Annotations:map[string]string{io.kubernetes.container.hash: c5da0ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f64a8061a713040aeb405dc23458a40dd0a3c1b1450e383b72e6aa1712ef156,PodSandboxId:c61ad633b47f1de89ddc06e4c5e2a9823c430206f06751b12526bac9b029a767,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727205900010086523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc848421c4d3765738b574607a4cc87,},Anno
tations:map[string]string{io.kubernetes.container.hash: efc3c799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fdd8b6d0418c3f799373dd991ef6f11424919d7d84f31efa9a72303606b87e3,PodSandboxId:52c6166496b39c1db6d6ffe5b661d66b7814e24baf507e4b54ea0aea499a2fd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727205900039913546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b5d8bdc62dae076b2fe91d71220f47,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d79d548b721a71a28c6ec90c88485586e4f97cb779262adda4c9360b6f5425,PodSandboxId:882e8b1c3d4697d77675309a767befabafbdf8d8fa8bd75a47e5a992f3f918b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727205900043671720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557254ad9f84c6abac60715f5b3d793a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7439d4ac1ecdb5dd3b8dfec0966b51679d1a91c0c037bff3745027ebdf1737,PodSandboxId:f307d230a41eb82bce80fe1605206ef2d201771cc43f637e972bc70405dc2dc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727205899970223711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3b5be0bd5325089de8955597a44efd,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff701e60-cc01-4bd4-ade1-b0fd1b772497 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.935820197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae958330-7f48-4b2a-9a3d-0fd28c5bdc4c name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.935891375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae958330-7f48-4b2a-9a3d-0fd28c5bdc4c name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.936850677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbfd8210-d80f-4beb-ad77-c6412f4e2471 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.937276164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205917937256492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbfd8210-d80f-4beb-ad77-c6412f4e2471 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.937758137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a68ace6b-ea48-4337-8aed-816fd36a7c60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.937818442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a68ace6b-ea48-4337-8aed-816fd36a7c60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.937995994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:549307619edd71a007baeec987bf0ed514ea58bb9d586b7fd8791194efa0686e,PodSandboxId:ed1f30603c26bcabd4c9a32638c5cc4758700f4262e6886e77ed61c16de1d150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727205912393713396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xf2r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d38f546-04ea-4b98-87bc-0d9c0b7da9e3,},Annotations:map[string]string{io.kubernetes.container.hash: b280598b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7be66f0da6cfecf99bac40214d050cbd6e54d42691ac753afe908a64f8d99c6,PodSandboxId:921b3ba339bb46633f201df661bc0eb2f1f98f3a4bf70f58f0edf7fed5680897,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727205905359728763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ns6gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 92d91b23-c0ed-4898-8d11-5e136db42882,},Annotations:map[string]string{io.kubernetes.container.hash: 66015662,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5aaf7b4720016559a0ecb5260ca8c9e764036db6e435a494aed7ef4e4f2537b,PodSandboxId:fefc924997e3cc33c9e39d583366bef4f7253a22ec0ed3023e3315dcefa5830a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205905341927790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
b4beb3-de06-45cd-b3a7-8950b5b4da65,},Annotations:map[string]string{io.kubernetes.container.hash: c5da0ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f64a8061a713040aeb405dc23458a40dd0a3c1b1450e383b72e6aa1712ef156,PodSandboxId:c61ad633b47f1de89ddc06e4c5e2a9823c430206f06751b12526bac9b029a767,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727205900010086523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc848421c4d3765738b574607a4cc87,},Anno
tations:map[string]string{io.kubernetes.container.hash: efc3c799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fdd8b6d0418c3f799373dd991ef6f11424919d7d84f31efa9a72303606b87e3,PodSandboxId:52c6166496b39c1db6d6ffe5b661d66b7814e24baf507e4b54ea0aea499a2fd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727205900039913546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b5d8bdc62dae076b2fe91d71220f47,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d79d548b721a71a28c6ec90c88485586e4f97cb779262adda4c9360b6f5425,PodSandboxId:882e8b1c3d4697d77675309a767befabafbdf8d8fa8bd75a47e5a992f3f918b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727205900043671720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557254ad9f84c6abac60715f5b3d793a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7439d4ac1ecdb5dd3b8dfec0966b51679d1a91c0c037bff3745027ebdf1737,PodSandboxId:f307d230a41eb82bce80fe1605206ef2d201771cc43f637e972bc70405dc2dc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727205899970223711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3b5be0bd5325089de8955597a44efd,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a68ace6b-ea48-4337-8aed-816fd36a7c60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.971923020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e832fb80-5988-44ef-af21-c7685411c966 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.972006108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e832fb80-5988-44ef-af21-c7685411c966 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.973238916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acb595b2-60db-46e1-b09e-e2ec5d57392b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.973749807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205917973727636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acb595b2-60db-46e1-b09e-e2ec5d57392b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.974545246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fe24f2b-6a29-4708-9a8b-58002b7fb40d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.974609871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fe24f2b-6a29-4708-9a8b-58002b7fb40d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:17 test-preload-184922 crio[663]: time="2024-09-24 19:25:17.974767794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:549307619edd71a007baeec987bf0ed514ea58bb9d586b7fd8791194efa0686e,PodSandboxId:ed1f30603c26bcabd4c9a32638c5cc4758700f4262e6886e77ed61c16de1d150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727205912393713396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xf2r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d38f546-04ea-4b98-87bc-0d9c0b7da9e3,},Annotations:map[string]string{io.kubernetes.container.hash: b280598b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7be66f0da6cfecf99bac40214d050cbd6e54d42691ac753afe908a64f8d99c6,PodSandboxId:921b3ba339bb46633f201df661bc0eb2f1f98f3a4bf70f58f0edf7fed5680897,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727205905359728763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ns6gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 92d91b23-c0ed-4898-8d11-5e136db42882,},Annotations:map[string]string{io.kubernetes.container.hash: 66015662,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5aaf7b4720016559a0ecb5260ca8c9e764036db6e435a494aed7ef4e4f2537b,PodSandboxId:fefc924997e3cc33c9e39d583366bef4f7253a22ec0ed3023e3315dcefa5830a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205905341927790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
b4beb3-de06-45cd-b3a7-8950b5b4da65,},Annotations:map[string]string{io.kubernetes.container.hash: c5da0ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f64a8061a713040aeb405dc23458a40dd0a3c1b1450e383b72e6aa1712ef156,PodSandboxId:c61ad633b47f1de89ddc06e4c5e2a9823c430206f06751b12526bac9b029a767,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727205900010086523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc848421c4d3765738b574607a4cc87,},Anno
tations:map[string]string{io.kubernetes.container.hash: efc3c799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fdd8b6d0418c3f799373dd991ef6f11424919d7d84f31efa9a72303606b87e3,PodSandboxId:52c6166496b39c1db6d6ffe5b661d66b7814e24baf507e4b54ea0aea499a2fd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727205900039913546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b5d8bdc62dae076b2fe91d71220f47,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d79d548b721a71a28c6ec90c88485586e4f97cb779262adda4c9360b6f5425,PodSandboxId:882e8b1c3d4697d77675309a767befabafbdf8d8fa8bd75a47e5a992f3f918b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727205900043671720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557254ad9f84c6abac60715f5b3d793a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7439d4ac1ecdb5dd3b8dfec0966b51679d1a91c0c037bff3745027ebdf1737,PodSandboxId:f307d230a41eb82bce80fe1605206ef2d201771cc43f637e972bc70405dc2dc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727205899970223711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3b5be0bd5325089de8955597a44efd,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fe24f2b-6a29-4708-9a8b-58002b7fb40d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.009191960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d541124-5278-4ca5-b224-dea1fb442e12 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.009277751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d541124-5278-4ca5-b224-dea1fb442e12 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.010501635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6953278-b2c6-47ba-b2b4-73c28575414c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.010926986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727205918010904886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6953278-b2c6-47ba-b2b4-73c28575414c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.011422276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbf3e4de-e357-4317-a4f3-5cf77eb0cc57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.011473358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbf3e4de-e357-4317-a4f3-5cf77eb0cc57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:25:18 test-preload-184922 crio[663]: time="2024-09-24 19:25:18.011693089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:549307619edd71a007baeec987bf0ed514ea58bb9d586b7fd8791194efa0686e,PodSandboxId:ed1f30603c26bcabd4c9a32638c5cc4758700f4262e6886e77ed61c16de1d150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727205912393713396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xf2r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d38f546-04ea-4b98-87bc-0d9c0b7da9e3,},Annotations:map[string]string{io.kubernetes.container.hash: b280598b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7be66f0da6cfecf99bac40214d050cbd6e54d42691ac753afe908a64f8d99c6,PodSandboxId:921b3ba339bb46633f201df661bc0eb2f1f98f3a4bf70f58f0edf7fed5680897,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727205905359728763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ns6gn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 92d91b23-c0ed-4898-8d11-5e136db42882,},Annotations:map[string]string{io.kubernetes.container.hash: 66015662,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5aaf7b4720016559a0ecb5260ca8c9e764036db6e435a494aed7ef4e4f2537b,PodSandboxId:fefc924997e3cc33c9e39d583366bef4f7253a22ec0ed3023e3315dcefa5830a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727205905341927790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
b4beb3-de06-45cd-b3a7-8950b5b4da65,},Annotations:map[string]string{io.kubernetes.container.hash: c5da0ab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f64a8061a713040aeb405dc23458a40dd0a3c1b1450e383b72e6aa1712ef156,PodSandboxId:c61ad633b47f1de89ddc06e4c5e2a9823c430206f06751b12526bac9b029a767,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727205900010086523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc848421c4d3765738b574607a4cc87,},Anno
tations:map[string]string{io.kubernetes.container.hash: efc3c799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fdd8b6d0418c3f799373dd991ef6f11424919d7d84f31efa9a72303606b87e3,PodSandboxId:52c6166496b39c1db6d6ffe5b661d66b7814e24baf507e4b54ea0aea499a2fd9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727205900039913546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b5d8bdc62dae076b2fe91d71220f47,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d79d548b721a71a28c6ec90c88485586e4f97cb779262adda4c9360b6f5425,PodSandboxId:882e8b1c3d4697d77675309a767befabafbdf8d8fa8bd75a47e5a992f3f918b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727205900043671720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557254ad9f84c6abac60715f5b3d793a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7439d4ac1ecdb5dd3b8dfec0966b51679d1a91c0c037bff3745027ebdf1737,PodSandboxId:f307d230a41eb82bce80fe1605206ef2d201771cc43f637e972bc70405dc2dc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727205899970223711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-184922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e3b5be0bd5325089de8955597a44efd,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbf3e4de-e357-4317-a4f3-5cf77eb0cc57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	549307619edd7       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   ed1f30603c26b       coredns-6d4b75cb6d-xf2r4
	e7be66f0da6cf       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   921b3ba339bb4       kube-proxy-ns6gn
	d5aaf7b472001       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   fefc924997e3c       storage-provisioner
	f3d79d548b721       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   882e8b1c3d469       kube-apiserver-test-preload-184922
	3fdd8b6d0418c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   52c6166496b39       kube-scheduler-test-preload-184922
	0f64a8061a713       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   c61ad633b47f1       etcd-test-preload-184922
	7d7439d4ac1ec       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   f307d230a41eb       kube-controller-manager-test-preload-184922
	
	
	==> coredns [549307619edd71a007baeec987bf0ed514ea58bb9d586b7fd8791194efa0686e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:53272 - 33679 "HINFO IN 1134752182424981043.5863410320399238992. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020919962s
	
	
	==> describe nodes <==
	Name:               test-preload-184922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-184922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=test-preload-184922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_23_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-184922
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:25:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:25:14 +0000   Tue, 24 Sep 2024 19:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:25:14 +0000   Tue, 24 Sep 2024 19:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:25:14 +0000   Tue, 24 Sep 2024 19:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:25:14 +0000   Tue, 24 Sep 2024 19:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    test-preload-184922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 94c028ecf9ce4fb2882ed45ab36d7b90
	  System UUID:                94c028ec-f9ce-4fb2-882e-d45ab36d7b90
	  Boot ID:                    514e7499-059a-4782-b272-43b25e9a2cf2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xf2r4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-test-preload-184922                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	  kube-system                 kube-apiserver-test-preload-184922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-test-preload-184922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-ns6gn                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-test-preload-184922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s (x5 over 96s)  kubelet          Node test-preload-184922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x4 over 96s)  kubelet          Node test-preload-184922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x4 over 96s)  kubelet          Node test-preload-184922 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node test-preload-184922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node test-preload-184922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node test-preload-184922 status is now: NodeHasSufficientPID
	  Normal  NodeReady                79s                kubelet          Node test-preload-184922 status is now: NodeReady
	  Normal  RegisteredNode           77s                node-controller  Node test-preload-184922 event: Registered Node test-preload-184922 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-184922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-184922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-184922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-184922 event: Registered Node test-preload-184922 in Controller
	
	
	==> dmesg <==
	[Sep24 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047565] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.034975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.654761] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.711626] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.530395] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.848544] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.064452] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051949] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.197296] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.108955] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.249212] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +12.466178] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.056822] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.683782] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[Sep24 19:25] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.287926] systemd-fstab-generator[1739]: Ignoring "noauto" option for root device
	[  +5.651793] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [0f64a8061a713040aeb405dc23458a40dd0a3c1b1450e383b72e6aa1712ef156] <==
	{"level":"info","ts":"2024-09-24T19:25:00.304Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"42163c43c38ae515","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-24T19:25:00.307Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-24T19:25:00.308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 switched to configuration voters=(4762059917732013333)"}
	{"level":"info","ts":"2024-09-24T19:25:00.308Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","added-peer-id":"42163c43c38ae515","added-peer-peer-urls":["https://192.168.39.144:2380"]}
	{"level":"info","ts":"2024-09-24T19:25:00.309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:25:00.309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:25:00.315Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:25:00.315Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"42163c43c38ae515","initial-advertise-peer-urls":["https://192.168.39.144:2380"],"listen-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:25:00.316Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:25:00.316Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-09-24T19:25:00.318Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T19:25:01.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2024-09-24T19:25:01.393Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:test-preload-184922 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:25:01.393Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:25:01.394Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:25:01.395Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2024-09-24T19:25:01.395Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:25:01.396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:25:01.396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:25:18 up 0 min,  0 users,  load average: 0.60, 0.18, 0.06
	Linux test-preload-184922 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f3d79d548b721a71a28c6ec90c88485586e4f97cb779262adda4c9360b6f5425] <==
	I0924 19:25:03.676402       1 controller.go:85] Starting OpenAPI controller
	I0924 19:25:03.676472       1 controller.go:85] Starting OpenAPI V3 controller
	I0924 19:25:03.676510       1 naming_controller.go:291] Starting NamingConditionController
	I0924 19:25:03.676870       1 establishing_controller.go:76] Starting EstablishingController
	I0924 19:25:03.677770       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0924 19:25:03.677844       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0924 19:25:03.677906       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0924 19:25:03.712089       1 cache.go:39] Caches are synced for autoregister controller
	I0924 19:25:03.712167       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0924 19:25:03.712349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 19:25:03.713881       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0924 19:25:03.714497       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0924 19:25:03.715042       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0924 19:25:03.722028       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0924 19:25:03.774507       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 19:25:04.320776       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0924 19:25:04.615935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:25:05.015671       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0924 19:25:05.027871       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0924 19:25:05.058548       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0924 19:25:05.074931       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:25:05.080405       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:25:05.605232       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0924 19:25:16.013046       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:25:16.054596       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7d7439d4ac1ecdb5dd3b8dfec0966b51679d1a91c0c037bff3745027ebdf1737] <==
	I0924 19:25:16.000014       1 event.go:294] "Event occurred" object="test-preload-184922" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-184922 event: Registered Node test-preload-184922 in Controller"
	I0924 19:25:16.001897       1 shared_informer.go:262] Caches are synced for attach detach
	I0924 19:25:16.002237       1 shared_informer.go:262] Caches are synced for PVC protection
	I0924 19:25:16.027276       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0924 19:25:16.033763       1 shared_informer.go:262] Caches are synced for namespace
	I0924 19:25:16.039088       1 shared_informer.go:262] Caches are synced for HPA
	I0924 19:25:16.040342       1 shared_informer.go:262] Caches are synced for expand
	I0924 19:25:16.041496       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0924 19:25:16.044379       1 shared_informer.go:262] Caches are synced for endpoint
	I0924 19:25:16.046013       1 shared_informer.go:262] Caches are synced for daemon sets
	I0924 19:25:16.049577       1 shared_informer.go:262] Caches are synced for job
	I0924 19:25:16.059849       1 shared_informer.go:262] Caches are synced for ephemeral
	I0924 19:25:16.061078       1 shared_informer.go:262] Caches are synced for stateful set
	I0924 19:25:16.087354       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0924 19:25:16.102489       1 shared_informer.go:262] Caches are synced for crt configmap
	I0924 19:25:16.106945       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0924 19:25:16.152593       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 19:25:16.152595       1 shared_informer.go:262] Caches are synced for deployment
	I0924 19:25:16.191610       1 shared_informer.go:262] Caches are synced for disruption
	I0924 19:25:16.191722       1 disruption.go:371] Sending events to api server.
	I0924 19:25:16.224166       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 19:25:16.243699       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0924 19:25:16.698376       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 19:25:16.698542       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0924 19:25:16.698474       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [e7be66f0da6cfecf99bac40214d050cbd6e54d42691ac753afe908a64f8d99c6] <==
	I0924 19:25:05.573931       1 node.go:163] Successfully retrieved node IP: 192.168.39.144
	I0924 19:25:05.574040       1 server_others.go:138] "Detected node IP" address="192.168.39.144"
	I0924 19:25:05.574133       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0924 19:25:05.599298       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0924 19:25:05.599349       1 server_others.go:206] "Using iptables Proxier"
	I0924 19:25:05.599585       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0924 19:25:05.599843       1 server.go:661] "Version info" version="v1.24.4"
	I0924 19:25:05.599864       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:25:05.601162       1 config.go:317] "Starting service config controller"
	I0924 19:25:05.601395       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0924 19:25:05.601419       1 config.go:226] "Starting endpoint slice config controller"
	I0924 19:25:05.601423       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0924 19:25:05.602192       1 config.go:444] "Starting node config controller"
	I0924 19:25:05.602215       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0924 19:25:05.702552       1 shared_informer.go:262] Caches are synced for node config
	I0924 19:25:05.702643       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0924 19:25:05.702660       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [3fdd8b6d0418c3f799373dd991ef6f11424919d7d84f31efa9a72303606b87e3] <==
	I0924 19:25:00.579246       1 serving.go:348] Generated self-signed cert in-memory
	W0924 19:25:03.681292       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:25:03.681416       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:25:03.681447       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:25:03.681473       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:25:03.705482       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0924 19:25:03.705560       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:25:03.715274       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:25:03.715337       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:25:03.716042       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0924 19:25:03.716102       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0924 19:25:03.815886       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.318582    1117 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.318637    1117 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.318710    1117 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.324072    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xf2r4" podUID=7d38f546-04ea-4b98-87bc-0d9c0b7da9e3
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.360031    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tb8n\" (UniqueName: \"kubernetes.io/projected/92d91b23-c0ed-4898-8d11-5e136db42882-kube-api-access-4tb8n\") pod \"kube-proxy-ns6gn\" (UID: \"92d91b23-c0ed-4898-8d11-5e136db42882\") " pod="kube-system/kube-proxy-ns6gn"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.360593    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzp6\" (UniqueName: \"kubernetes.io/projected/c5b4beb3-de06-45cd-b3a7-8950b5b4da65-kube-api-access-jlzp6\") pod \"storage-provisioner\" (UID: \"c5b4beb3-de06-45cd-b3a7-8950b5b4da65\") " pod="kube-system/storage-provisioner"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.360714    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92d91b23-c0ed-4898-8d11-5e136db42882-xtables-lock\") pod \"kube-proxy-ns6gn\" (UID: \"92d91b23-c0ed-4898-8d11-5e136db42882\") " pod="kube-system/kube-proxy-ns6gn"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.360826    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume\") pod \"coredns-6d4b75cb6d-xf2r4\" (UID: \"7d38f546-04ea-4b98-87bc-0d9c0b7da9e3\") " pod="kube-system/coredns-6d4b75cb6d-xf2r4"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.360963    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92d91b23-c0ed-4898-8d11-5e136db42882-kube-proxy\") pod \"kube-proxy-ns6gn\" (UID: \"92d91b23-c0ed-4898-8d11-5e136db42882\") " pod="kube-system/kube-proxy-ns6gn"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.361188    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92d91b23-c0ed-4898-8d11-5e136db42882-lib-modules\") pod \"kube-proxy-ns6gn\" (UID: \"92d91b23-c0ed-4898-8d11-5e136db42882\") " pod="kube-system/kube-proxy-ns6gn"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.361371    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrfv7\" (UniqueName: \"kubernetes.io/projected/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-kube-api-access-xrfv7\") pod \"coredns-6d4b75cb6d-xf2r4\" (UID: \"7d38f546-04ea-4b98-87bc-0d9c0b7da9e3\") " pod="kube-system/coredns-6d4b75cb6d-xf2r4"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.361480    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5b4beb3-de06-45cd-b3a7-8950b5b4da65-tmp\") pod \"storage-provisioner\" (UID: \"c5b4beb3-de06-45cd-b3a7-8950b5b4da65\") " pod="kube-system/storage-provisioner"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: I0924 19:25:04.361522    1117 reconciler.go:159] "Reconciler: start to sync state"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.381807    1117 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.466074    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.466730    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume podName:7d38f546-04ea-4b98-87bc-0d9c0b7da9e3 nodeName:}" failed. No retries permitted until 2024-09-24 19:25:04.966693248 +0000 UTC m=+5.759963835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume") pod "coredns-6d4b75cb6d-xf2r4" (UID: "7d38f546-04ea-4b98-87bc-0d9c0b7da9e3") : object "kube-system"/"coredns" not registered
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.971194    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 19:25:04 test-preload-184922 kubelet[1117]: E0924 19:25:04.971254    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume podName:7d38f546-04ea-4b98-87bc-0d9c0b7da9e3 nodeName:}" failed. No retries permitted until 2024-09-24 19:25:05.971240427 +0000 UTC m=+6.764511009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume") pod "coredns-6d4b75cb6d-xf2r4" (UID: "7d38f546-04ea-4b98-87bc-0d9c0b7da9e3") : object "kube-system"/"coredns" not registered
	Sep 24 19:25:05 test-preload-184922 kubelet[1117]: E0924 19:25:05.984577    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 19:25:05 test-preload-184922 kubelet[1117]: E0924 19:25:05.984998    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume podName:7d38f546-04ea-4b98-87bc-0d9c0b7da9e3 nodeName:}" failed. No retries permitted until 2024-09-24 19:25:07.984977841 +0000 UTC m=+8.778248424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume") pod "coredns-6d4b75cb6d-xf2r4" (UID: "7d38f546-04ea-4b98-87bc-0d9c0b7da9e3") : object "kube-system"/"coredns" not registered
	Sep 24 19:25:06 test-preload-184922 kubelet[1117]: E0924 19:25:06.404964    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xf2r4" podUID=7d38f546-04ea-4b98-87bc-0d9c0b7da9e3
	Sep 24 19:25:08 test-preload-184922 kubelet[1117]: E0924 19:25:08.001558    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 19:25:08 test-preload-184922 kubelet[1117]: E0924 19:25:08.002084    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume podName:7d38f546-04ea-4b98-87bc-0d9c0b7da9e3 nodeName:}" failed. No retries permitted until 2024-09-24 19:25:12.002060413 +0000 UTC m=+12.795330997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d38f546-04ea-4b98-87bc-0d9c0b7da9e3-config-volume") pod "coredns-6d4b75cb6d-xf2r4" (UID: "7d38f546-04ea-4b98-87bc-0d9c0b7da9e3") : object "kube-system"/"coredns" not registered
	Sep 24 19:25:08 test-preload-184922 kubelet[1117]: E0924 19:25:08.404790    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xf2r4" podUID=7d38f546-04ea-4b98-87bc-0d9c0b7da9e3
	Sep 24 19:25:09 test-preload-184922 kubelet[1117]: I0924 19:25:09.411179    1117 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ef5c72b0-8ec8-4d1b-ad4d-5a14259b9c6a path="/var/lib/kubelet/pods/ef5c72b0-8ec8-4d1b-ad4d-5a14259b9c6a/volumes"
	
	
	==> storage-provisioner [d5aaf7b4720016559a0ecb5260ca8c9e764036db6e435a494aed7ef4e4f2537b] <==
	I0924 19:25:05.446759       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-184922 -n test-preload-184922
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-184922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-184922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-184922
--- FAIL: TestPreload (158.84s)

x
+
TestKubernetesUpgrade (365.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.50076227s)

-- stdout --
	* [kubernetes-upgrade-629510] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-629510" primary control-plane node in "kubernetes-upgrade-629510" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0924 19:27:10.597701   46588 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:27:10.597851   46588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:27:10.597860   46588 out.go:358] Setting ErrFile to fd 2...
	I0924 19:27:10.597867   46588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:27:10.598787   46588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:27:10.599904   46588 out.go:352] Setting JSON to false
	I0924 19:27:10.600997   46588 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4182,"bootTime":1727201849,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:27:10.601057   46588 start.go:139] virtualization: kvm guest
	I0924 19:27:10.605342   46588 out.go:177] * [kubernetes-upgrade-629510] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:27:10.607121   46588 notify.go:220] Checking for updates...
	I0924 19:27:10.608554   46588 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:27:10.611342   46588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:27:10.614018   46588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:27:10.615347   46588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:27:10.617410   46588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:27:10.620426   46588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:27:10.622332   46588 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:27:10.662640   46588 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 19:27:10.663766   46588 start.go:297] selected driver: kvm2
	I0924 19:27:10.663796   46588 start.go:901] validating driver "kvm2" against <nil>
	I0924 19:27:10.663811   46588 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:27:10.664940   46588 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:27:10.665031   46588 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:27:10.684561   46588 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:27:10.684620   46588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 19:27:10.684914   46588 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 19:27:10.684942   46588 cni.go:84] Creating CNI manager for ""
	I0924 19:27:10.684982   46588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:27:10.684993   46588 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 19:27:10.685051   46588 start.go:340] cluster config:
	{Name:kubernetes-upgrade-629510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:27:10.685161   46588 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:27:10.687767   46588 out.go:177] * Starting "kubernetes-upgrade-629510" primary control-plane node in "kubernetes-upgrade-629510" cluster
	I0924 19:27:10.689434   46588 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:27:10.689474   46588 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:27:10.689492   46588 cache.go:56] Caching tarball of preloaded images
	I0924 19:27:10.689581   46588 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:27:10.689591   46588 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
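The lines above show the preload tarball being served from the local cache rather than downloaded again. Below is a minimal sketch, in Go with only the standard library, of that check-cache-then-download pattern; the path and URL are placeholders for illustration, not the values minikube actually uses.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// ensurePreload returns the local tarball path, downloading it only when it
// is not already present in the cache (hypothetical path and URL).
func ensurePreload(cachePath, url string) (string, error) {
	if _, err := os.Stat(cachePath); err == nil {
		return cachePath, nil // cache hit: skip the download entirely
	}
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(cachePath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := io.Copy(f, resp.Body); err != nil {
		return "", err
	}
	return cachePath, nil
}

func main() {
	p, err := ensurePreload(
		"/tmp/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"https://example.invalid/preload.tar.lz4", // placeholder URL
	)
	fmt.Println(p, err)
}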
	I0924 19:27:10.689921   46588 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/config.json ...
	I0924 19:27:10.689944   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/config.json: {Name:mkf29b5059ebc309e964a3b2f34ba61b8cd66e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:10.690104   46588 start.go:360] acquireMachinesLock for kubernetes-upgrade-629510: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:27:10.690146   46588 start.go:364] duration metric: took 21.961µs to acquireMachinesLock for "kubernetes-upgrade-629510"
	I0924 19:27:10.690169   46588 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-629510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:27:10.690249   46588 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 19:27:10.691948   46588 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 19:27:10.692093   46588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:27:10.692128   46588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:27:10.708503   46588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0924 19:27:10.708891   46588 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:27:10.709376   46588 main.go:141] libmachine: Using API Version  1
	I0924 19:27:10.709395   46588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:27:10.709763   46588 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:27:10.709951   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetMachineName
	I0924 19:27:10.710098   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:10.710255   46588 start.go:159] libmachine.API.Create for "kubernetes-upgrade-629510" (driver="kvm2")
	I0924 19:27:10.710287   46588 client.go:168] LocalClient.Create starting
	I0924 19:27:10.710319   46588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 19:27:10.710352   46588 main.go:141] libmachine: Decoding PEM data...
	I0924 19:27:10.710372   46588 main.go:141] libmachine: Parsing certificate...
	I0924 19:27:10.710430   46588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 19:27:10.710459   46588 main.go:141] libmachine: Decoding PEM data...
	I0924 19:27:10.710475   46588 main.go:141] libmachine: Parsing certificate...
	I0924 19:27:10.710506   46588 main.go:141] libmachine: Running pre-create checks...
	I0924 19:27:10.710520   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .PreCreateCheck
	I0924 19:27:10.710824   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetConfigRaw
	I0924 19:27:10.711176   46588 main.go:141] libmachine: Creating machine...
	I0924 19:27:10.711189   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Create
	I0924 19:27:10.711304   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Creating KVM machine...
	I0924 19:27:10.712640   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found existing default KVM network
	I0924 19:27:10.713250   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:10.713134   46646 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I0924 19:27:10.713316   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | created network xml: 
	I0924 19:27:10.713337   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | <network>
	I0924 19:27:10.713347   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   <name>mk-kubernetes-upgrade-629510</name>
	I0924 19:27:10.713355   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   <dns enable='no'/>
	I0924 19:27:10.713362   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   
	I0924 19:27:10.713374   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 19:27:10.713383   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |     <dhcp>
	I0924 19:27:10.713391   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 19:27:10.713398   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |     </dhcp>
	I0924 19:27:10.713406   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   </ip>
	I0924 19:27:10.713416   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG |   
	I0924 19:27:10.713437   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | </network>
	I0924 19:27:10.713448   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | 
	I0924 19:27:10.719465   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | trying to create private KVM network mk-kubernetes-upgrade-629510 192.168.39.0/24...
	I0924 19:27:10.793539   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | private KVM network mk-kubernetes-upgrade-629510 192.168.39.0/24 created
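The driver prints the network XML it is about to define before creating the private KVM network. A minimal sketch (Go, text/template, standard library only) of rendering that same XML shape from a name and subnet follows; the values are copied from the log, and the rendered document would still need to be handed to libvirt (for example via virsh net-define), which this sketch deliberately does not do.

package main

import (
	"os"
	"text/template"
)

// netParams mirrors the fields visible in the XML above (illustrative only).
type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

const netXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	p := netParams{
		Name:      "mk-kubernetes-upgrade-629510",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.39.2",
		DHCPEnd:   "192.168.39.253",
	}
	// Render to stdout; defining the network in libvirt is left out here.
	template.Must(template.New("net").Parse(netXML)).Execute(os.Stdout, p)
}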
	I0924 19:27:10.793574   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510 ...
	I0924 19:27:10.793584   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:10.793498   46646 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:27:10.793601   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 19:27:10.793791   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 19:27:11.085067   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:11.084951   46646 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa...
	I0924 19:27:11.287386   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:11.287285   46646 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/kubernetes-upgrade-629510.rawdisk...
	I0924 19:27:11.287409   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Writing magic tar header
	I0924 19:27:11.287489   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Writing SSH key tar header
	I0924 19:27:11.287524   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:11.287440   46646 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510 ...
	I0924 19:27:11.287610   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510
	I0924 19:27:11.287633   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 19:27:11.287648   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510 (perms=drwx------)
	I0924 19:27:11.287663   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:27:11.287679   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 19:27:11.287688   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 19:27:11.287702   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 19:27:11.287712   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 19:27:11.287720   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home/jenkins
	I0924 19:27:11.287739   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Checking permissions on dir: /home
	I0924 19:27:11.287752   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 19:27:11.287763   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Skipping /home - not owner
	I0924 19:27:11.287827   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 19:27:11.287864   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
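Before the domain is defined, the driver creates the .rawdisk backing file and then fixes directory permissions, as logged above. Here is a minimal sketch of creating a sparse raw disk image of a given size with a truncate call; the path is hypothetical, and the real code also writes a small tar header containing the SSH key into the image, which is omitted.

package main

import (
	"log"
	"os"
)

// createRawDisk makes a sparse file of sizeBytes; blocks are allocated lazily
// by the filesystem, so a 20000MB image costs almost no space up front.
func createRawDisk(path string, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return f.Truncate(sizeBytes)
}

func main() {
	// 20000MB, matching the DiskSize in the cluster config above.
	if err := createRawDisk("/tmp/kubernetes-upgrade-629510.rawdisk", 20000*1024*1024); err != nil {
		log.Fatal(err)
	}
}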
	I0924 19:27:11.287878   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Creating domain...
	I0924 19:27:11.288932   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) define libvirt domain using xml: 
	I0924 19:27:11.288949   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) <domain type='kvm'>
	I0924 19:27:11.288959   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <name>kubernetes-upgrade-629510</name>
	I0924 19:27:11.288972   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <memory unit='MiB'>2200</memory>
	I0924 19:27:11.288984   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <vcpu>2</vcpu>
	I0924 19:27:11.288997   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <features>
	I0924 19:27:11.289004   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <acpi/>
	I0924 19:27:11.289007   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <apic/>
	I0924 19:27:11.289019   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <pae/>
	I0924 19:27:11.289035   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     
	I0924 19:27:11.289042   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   </features>
	I0924 19:27:11.289049   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <cpu mode='host-passthrough'>
	I0924 19:27:11.289059   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   
	I0924 19:27:11.289069   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   </cpu>
	I0924 19:27:11.289090   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <os>
	I0924 19:27:11.289108   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <type>hvm</type>
	I0924 19:27:11.289118   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <boot dev='cdrom'/>
	I0924 19:27:11.289128   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <boot dev='hd'/>
	I0924 19:27:11.289140   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <bootmenu enable='no'/>
	I0924 19:27:11.289152   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   </os>
	I0924 19:27:11.289165   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   <devices>
	I0924 19:27:11.289181   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <disk type='file' device='cdrom'>
	I0924 19:27:11.289199   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/boot2docker.iso'/>
	I0924 19:27:11.289210   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <target dev='hdc' bus='scsi'/>
	I0924 19:27:11.289222   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <readonly/>
	I0924 19:27:11.289230   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </disk>
	I0924 19:27:11.289241   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <disk type='file' device='disk'>
	I0924 19:27:11.289254   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 19:27:11.289270   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/kubernetes-upgrade-629510.rawdisk'/>
	I0924 19:27:11.289289   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <target dev='hda' bus='virtio'/>
	I0924 19:27:11.289298   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </disk>
	I0924 19:27:11.289309   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <interface type='network'>
	I0924 19:27:11.289318   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <source network='mk-kubernetes-upgrade-629510'/>
	I0924 19:27:11.289326   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <model type='virtio'/>
	I0924 19:27:11.289332   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </interface>
	I0924 19:27:11.289344   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <interface type='network'>
	I0924 19:27:11.289360   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <source network='default'/>
	I0924 19:27:11.289373   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <model type='virtio'/>
	I0924 19:27:11.289384   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </interface>
	I0924 19:27:11.289395   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <serial type='pty'>
	I0924 19:27:11.289405   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <target port='0'/>
	I0924 19:27:11.289421   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </serial>
	I0924 19:27:11.289434   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <console type='pty'>
	I0924 19:27:11.289445   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <target type='serial' port='0'/>
	I0924 19:27:11.289453   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </console>
	I0924 19:27:11.289465   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     <rng model='virtio'>
	I0924 19:27:11.289474   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)       <backend model='random'>/dev/random</backend>
	I0924 19:27:11.289485   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     </rng>
	I0924 19:27:11.289494   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     
	I0924 19:27:11.289505   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)     
	I0924 19:27:11.289519   46588 main.go:141] libmachine: (kubernetes-upgrade-629510)   </devices>
	I0924 19:27:11.289534   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) </domain>
	I0924 19:27:11.289555   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) 
	I0924 19:27:11.294256   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:29:d5:e2 in network default
	I0924 19:27:11.295239   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:11.295263   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Ensuring networks are active...
	I0924 19:27:11.296162   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Ensuring network default is active
	I0924 19:27:11.296755   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Ensuring network mk-kubernetes-upgrade-629510 is active
	I0924 19:27:11.297498   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Getting domain xml...
	I0924 19:27:11.298329   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Creating domain...
	I0924 19:27:12.514253   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Waiting to get IP...
	I0924 19:27:12.515215   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:12.515740   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:12.515806   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:12.515717   46646 retry.go:31] will retry after 297.405077ms: waiting for machine to come up
	I0924 19:27:12.815574   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:12.816068   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:12.816087   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:12.816026   46646 retry.go:31] will retry after 288.969726ms: waiting for machine to come up
	I0924 19:27:13.106616   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.107083   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.107109   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:13.107028   46646 retry.go:31] will retry after 467.060608ms: waiting for machine to come up
	I0924 19:27:13.575196   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.575640   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.575666   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:13.575587   46646 retry.go:31] will retry after 404.46272ms: waiting for machine to come up
	I0924 19:27:13.981173   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.981656   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:13.981680   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:13.981621   46646 retry.go:31] will retry after 496.425108ms: waiting for machine to come up
	I0924 19:27:14.479332   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:14.479832   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:14.479859   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:14.479771   46646 retry.go:31] will retry after 637.719083ms: waiting for machine to come up
	I0924 19:27:15.118481   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:15.118957   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:15.119006   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:15.118907   46646 retry.go:31] will retry after 987.936981ms: waiting for machine to come up
	I0924 19:27:16.108546   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:16.108907   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:16.108932   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:16.108865   46646 retry.go:31] will retry after 1.033177545s: waiting for machine to come up
	I0924 19:27:17.143987   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:17.144449   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:17.144477   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:17.144399   46646 retry.go:31] will retry after 1.394662828s: waiting for machine to come up
	I0924 19:27:18.540958   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:18.541322   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:18.541348   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:18.541251   46646 retry.go:31] will retry after 1.644078973s: waiting for machine to come up
	I0924 19:27:20.188006   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:20.188409   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:20.188437   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:20.188360   46646 retry.go:31] will retry after 2.12877775s: waiting for machine to come up
	I0924 19:27:22.319715   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:22.320171   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:22.320197   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:22.320124   46646 retry.go:31] will retry after 2.986817188s: waiting for machine to come up
	I0924 19:27:25.310157   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:25.310535   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:25.310556   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:25.310492   46646 retry.go:31] will retry after 3.627389726s: waiting for machine to come up
	I0924 19:27:28.941673   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:28.942023   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find current IP address of domain kubernetes-upgrade-629510 in network mk-kubernetes-upgrade-629510
	I0924 19:27:28.942044   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | I0924 19:27:28.941980   46646 retry.go:31] will retry after 3.942752084s: waiting for machine to come up
	I0924 19:27:32.887688   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:32.888090   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Found IP for machine: 192.168.39.76
	I0924 19:27:32.888118   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has current primary IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
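The repeated "will retry after ..." lines above show the driver polling for a DHCP lease with a growing, jittered delay until the VM's IP appears. A minimal sketch of that retry-with-backoff loop around a hypothetical lookupIP function; the delays, cap, and timeout are illustrative rather than the exact values minikube uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the VM's MAC.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address") // placeholder
}

// waitForIP polls lookupIP, sleeping a jittered, growing delay between tries.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
		fmt.Printf("retry %d: waiting %v for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2 // back off until a cap is reached
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(5 * time.Second)
	fmt.Println(ip, err)
}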
	I0924 19:27:32.888124   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Reserving static IP address...
	I0924 19:27:32.888437   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-629510", mac: "52:54:00:0b:db:d8", ip: "192.168.39.76"} in network mk-kubernetes-upgrade-629510
	I0924 19:27:32.959300   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Getting to WaitForSSH function...
	I0924 19:27:32.959333   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Reserved static IP address: 192.168.39.76
	I0924 19:27:32.959349   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Waiting for SSH to be available...
	I0924 19:27:32.961921   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:32.962368   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:32.962413   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:32.962588   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Using SSH client type: external
	I0924 19:27:32.962618   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa (-rw-------)
	I0924 19:27:32.962659   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:27:32.962671   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | About to run SSH command:
	I0924 19:27:32.962680   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | exit 0
	I0924 19:27:33.086408   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | SSH cmd err, output: <nil>: 
	I0924 19:27:33.086645   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) KVM machine creation complete!
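The WaitForSSH step above runs `exit 0` through an external ssh client with the option list shown in the log until the command succeeds. A minimal sketch of that reachability probe using os/exec; the host, key path, and subset of options are taken from the log, but the function itself is illustrative and not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `exit 0` on the target; a nil error means sshd is up and
// the key is accepted. Options mirror the external-client flags in the log.
func sshReachable(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshReachable("192.168.39.76", "/path/to/id_rsa") // illustrative values
	fmt.Println("ssh reachable:", err == nil)
}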
	I0924 19:27:33.086955   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetConfigRaw
	I0924 19:27:33.087524   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:33.087725   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:33.087826   46588 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 19:27:33.087839   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetState
	I0924 19:27:33.088958   46588 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 19:27:33.088971   46588 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 19:27:33.088988   46588 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 19:27:33.089009   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.091417   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.091746   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.091784   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.091892   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.092060   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.092202   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.092356   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.092461   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:33.092735   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:33.092750   46588 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 19:27:33.194074   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:27:33.194103   46588 main.go:141] libmachine: Detecting the provisioner...
	I0924 19:27:33.194112   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.197230   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.197494   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.197518   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.197684   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.197851   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.198031   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.198150   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.198303   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:33.198459   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:33.198469   46588 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 19:27:33.299163   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 19:27:33.299220   46588 main.go:141] libmachine: found compatible host: buildroot
	I0924 19:27:33.299227   46588 main.go:141] libmachine: Provisioning with buildroot...
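Detecting the provisioner works by reading /etc/os-release over SSH (the output is shown above) and matching the distribution name, "Buildroot" here. A minimal sketch of parsing that key=value format; the compatibility check is reduced to a single string comparison purely for illustration.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines (as printed above) into a map,
// stripping optional quotes around the value.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println("compatible host:", info["NAME"] == "Buildroot", info["PRETTY_NAME"])
}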
	I0924 19:27:33.299235   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetMachineName
	I0924 19:27:33.299429   46588 buildroot.go:166] provisioning hostname "kubernetes-upgrade-629510"
	I0924 19:27:33.299442   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetMachineName
	I0924 19:27:33.299576   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.302073   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.302435   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.302458   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.302569   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.302736   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.302897   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.303029   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.303183   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:33.303348   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:33.303364   46588 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-629510 && echo "kubernetes-upgrade-629510" | sudo tee /etc/hostname
	I0924 19:27:33.419153   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-629510
	
	I0924 19:27:33.419187   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.421831   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.422183   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.422207   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.422340   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.422517   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.422685   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.422859   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.423002   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:33.423170   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:33.423187   46588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-629510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-629510/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-629510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:27:33.530749   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:27:33.530782   46588 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:27:33.530805   46588 buildroot.go:174] setting up certificates
	I0924 19:27:33.530816   46588 provision.go:84] configureAuth start
	I0924 19:27:33.530849   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetMachineName
	I0924 19:27:33.531112   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetIP
	I0924 19:27:33.533653   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.533997   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.534032   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.534153   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.536225   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.536505   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.536534   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.536621   46588 provision.go:143] copyHostCerts
	I0924 19:27:33.536675   46588 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:27:33.536684   46588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:27:33.536747   46588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:27:33.536853   46588 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:27:33.536863   46588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:27:33.536892   46588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:27:33.536954   46588 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:27:33.536961   46588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:27:33.536983   46588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:27:33.537039   46588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-629510 san=[127.0.0.1 192.168.39.76 kubernetes-upgrade-629510 localhost minikube]
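The server certificate generated above carries both IP and DNS SANs (127.0.0.1, 192.168.39.76, the profile name, localhost, minikube). Below is a minimal self-signed sketch with crypto/x509 showing how those SANs end up in the certificate template; the real flow signs with the minikube CA key rather than self-signing, which is omitted here for brevity.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-629510"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: addresses and names the API server must answer for.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
		DNSNames:    []string{"kubernetes-upgrade-629510", "localhost", "minikube"},
	}
	// Self-signed for the sketch; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}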
	I0924 19:27:33.600434   46588 provision.go:177] copyRemoteCerts
	I0924 19:27:33.600495   46588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:27:33.600516   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.602906   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.603195   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.603223   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.603373   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.603543   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.603692   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.603795   46588 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:27:33.684222   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:27:33.706571   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0924 19:27:33.727700   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:27:33.748643   46588 provision.go:87] duration metric: took 217.815228ms to configureAuth
	I0924 19:27:33.748667   46588 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:27:33.748815   46588 config.go:182] Loaded profile config "kubernetes-upgrade-629510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:27:33.748884   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.751247   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.751643   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.751664   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.751896   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.752079   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.752286   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.752460   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.752611   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:33.752827   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:33.752842   46588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:27:33.958703   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:27:33.958730   46588 main.go:141] libmachine: Checking connection to Docker...
	I0924 19:27:33.958740   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetURL
	I0924 19:27:33.959930   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Using libvirt version 6000000
	I0924 19:27:33.962138   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.962494   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.962528   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.962667   46588 main.go:141] libmachine: Docker is up and running!
	I0924 19:27:33.962684   46588 main.go:141] libmachine: Reticulating splines...
	I0924 19:27:33.962692   46588 client.go:171] duration metric: took 23.252394475s to LocalClient.Create
	I0924 19:27:33.962718   46588 start.go:167] duration metric: took 23.252463043s to libmachine.API.Create "kubernetes-upgrade-629510"
	I0924 19:27:33.962731   46588 start.go:293] postStartSetup for "kubernetes-upgrade-629510" (driver="kvm2")
	I0924 19:27:33.962742   46588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:27:33.962758   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:33.962988   46588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:27:33.963014   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:33.965082   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.965398   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:33.965434   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:33.965525   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:33.965707   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:33.965830   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:33.965931   46588 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:27:34.044536   46588 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:27:34.048364   46588 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:27:34.048386   46588 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:27:34.048459   46588 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:27:34.048572   46588 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:27:34.048691   46588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:27:34.057475   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:27:34.078958   46588 start.go:296] duration metric: took 116.216256ms for postStartSetup
	I0924 19:27:34.079003   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetConfigRaw
	I0924 19:27:34.079615   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetIP
	I0924 19:27:34.082350   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.082691   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:34.082723   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.082925   46588 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/config.json ...
	I0924 19:27:34.083104   46588 start.go:128] duration metric: took 23.392845579s to createHost
	I0924 19:27:34.083127   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:34.085119   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.085402   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:34.085425   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.085585   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:34.085783   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:34.085963   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:34.086133   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:34.086276   46588 main.go:141] libmachine: Using SSH client type: native
	I0924 19:27:34.086437   46588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0924 19:27:34.086451   46588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:27:34.186996   46588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727206054.161285372
	
	I0924 19:27:34.187022   46588 fix.go:216] guest clock: 1727206054.161285372
	I0924 19:27:34.187031   46588 fix.go:229] Guest: 2024-09-24 19:27:34.161285372 +0000 UTC Remote: 2024-09-24 19:27:34.083116485 +0000 UTC m=+23.527718830 (delta=78.168887ms)
	I0924 19:27:34.187048   46588 fix.go:200] guest clock delta is within tolerance: 78.168887ms
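
For reference, the fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host-side timestamp, and accept the host when the delta is small. A minimal sketch of that comparison, assuming a simple fixed tolerance rather than minikube's actual threshold:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Example value in the same shape as the SSH output captured in the log.
	guest, err := parseGuestClock("1727206054.161285372")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := remote.Sub(guest)
	// Hypothetical tolerance, for illustration only.
	tolerance := 2 * time.Second
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock delta exceeds tolerance; clock would need adjusting")
	}
}
```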
	I0924 19:27:34.187053   46588 start.go:83] releasing machines lock for "kubernetes-upgrade-629510", held for 23.496896969s
	I0924 19:27:34.187079   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:34.187326   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetIP
	I0924 19:27:34.190281   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.190645   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:34.190674   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.190847   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:34.191290   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:34.191451   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:27:34.191532   46588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:27:34.191579   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:34.191694   46588 ssh_runner.go:195] Run: cat /version.json
	I0924 19:27:34.191713   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:27:34.194406   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.194460   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.194816   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:34.194849   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.194908   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:34.194934   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:34.194993   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:34.195158   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:34.195230   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:27:34.195327   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:34.195403   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:27:34.195464   46588 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:27:34.195535   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:27:34.195661   46588 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:27:34.295261   46588 ssh_runner.go:195] Run: systemctl --version
	I0924 19:27:34.301167   46588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:27:34.470256   46588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:27:34.475709   46588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:27:34.475768   46588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:27:34.491280   46588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
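
The `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above renames any bridge or podman CNI configs out of the way so they stop shadowing the CNI that minikube configures later. A rough Go equivalent of that rename pass (illustrative only; the real step is the shell command shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the `find ... -exec mv` command in the log.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
```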
	I0924 19:27:34.491303   46588 start.go:495] detecting cgroup driver to use...
	I0924 19:27:34.491359   46588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:27:34.507115   46588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:27:34.520633   46588 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:27:34.520696   46588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:27:34.534263   46588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:27:34.547333   46588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:27:34.666542   46588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:27:34.805470   46588 docker.go:233] disabling docker service ...
	I0924 19:27:34.805547   46588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:27:34.822507   46588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:27:34.835858   46588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:27:34.967943   46588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:27:35.079279   46588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:27:35.092593   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:27:35.110009   46588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:27:35.110070   46588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:27:35.119869   46588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:27:35.119946   46588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:27:35.129592   46588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:27:35.141466   46588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
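
The sed invocations above pin the pause image to registry.k8s.io/pause:3.2 and switch cri-o to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A hedged sketch applying the same substitutions to an example config string (only the keys edited by the commands above are assumed to exist; the surrounding TOML is made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up starting config; only pause_image, cgroup_manager and
	// conmon_cgroup are taken from the commands in the log.
	conf := `[crio.image]
pause_image = "k8s.gcr.io/pause:3.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Mirror: delete conmon_cgroup, then re-add it as "pod" after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```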
	I0924 19:27:35.150922   46588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:27:35.160478   46588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:27:35.168927   46588 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:27:35.168973   46588 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:27:35.180666   46588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
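
The sysctl probe above fails with status 255 because br_netfilter is not loaded yet, which the log notes "might be okay"; the code then falls back to modprobe and enables IPv4 forwarding before restarting crio. A small Go sketch of that probe-then-fallback sequence (assumes root; the command names come straight from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe: is the bridge netfilter sysctl visible yet?
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		// Fallback: load the module so the sysctl appears.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
```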
	I0924 19:27:35.189497   46588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:27:35.306767   46588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:27:35.400924   46588 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:27:35.401027   46588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:27:35.406364   46588 start.go:563] Will wait 60s for crictl version
	I0924 19:27:35.406426   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:35.410609   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:27:35.449211   46588 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:27:35.449301   46588 ssh_runner.go:195] Run: crio --version
	I0924 19:27:35.477597   46588 ssh_runner.go:195] Run: crio --version
	I0924 19:27:35.510399   46588 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:27:35.512401   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetIP
	I0924 19:27:35.515744   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:35.516087   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:27:35.516116   46588 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:27:35.516383   46588 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:27:35.520608   46588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:27:35.532540   46588 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-629510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:27:35.532673   46588 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:27:35.532782   46588 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:27:35.562673   46588 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:27:35.562745   46588 ssh_runner.go:195] Run: which lz4
	I0924 19:27:35.566597   46588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:27:35.570436   46588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:27:35.570473   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:27:36.976054   46588 crio.go:462] duration metric: took 1.409488754s to copy over tarball
	I0924 19:27:36.976129   46588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:27:39.444081   46588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.467918436s)
	I0924 19:27:39.444113   46588 crio.go:469] duration metric: took 2.468026016s to extract the tarball
	I0924 19:27:39.444122   46588 ssh_runner.go:146] rm: /preloaded.tar.lz4
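
Because `crictl images` showed the expected v1.20.0 images were missing, the preloaded image tarball was copied to the guest and unpacked into /var with lz4, then removed. A rough local equivalent of the stat-then-extract-then-cleanup step (the path and tar flags are taken from the commands above; this is a sketch, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing; it would be copied over first:", err)
		return
	}

	// Extract, mirroring:
	//   tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}

	// The tarball is removed once it has been unpacked.
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "cleanup failed:", err)
	}
}
```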
	I0924 19:27:39.485468   46588 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:27:39.528971   46588 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:27:39.529002   46588 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:27:39.529080   46588 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:39.529109   46588 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:39.529127   46588 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:27:39.529131   46588 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.529163   46588 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.529183   46588 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.529095   46588 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.529069   46588 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:27:39.530955   46588 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.530970   46588 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:39.530953   46588 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.530954   46588 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:27:39.530961   46588 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.530962   46588 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:27:39.531013   46588 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:39.530985   46588 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.678529   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.682979   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.690028   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.703574   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.704126   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:27:39.739380   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:39.754583   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:39.759076   46588 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:27:39.759120   46588 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.759164   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.787794   46588 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:27:39.787840   46588 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.787888   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.815719   46588 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:27:39.815764   46588 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.815825   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.815827   46588 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:27:39.815867   46588 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.815915   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.821592   46588 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:27:39.821633   46588 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:27:39.821691   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.860636   46588 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:27:39.860677   46588 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:39.860720   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.860724   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.860818   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.860782   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.860833   46588 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:27:39.860893   46588 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:39.860906   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.860921   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:27:39.860927   46588 ssh_runner.go:195] Run: which crictl
	I0924 19:27:39.864704   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:39.947870   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:39.947912   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:39.977598   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:27:39.977606   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:39.990491   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:39.990501   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:27:39.990614   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:40.082208   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:27:40.082219   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:27:40.110059   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:40.110089   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:27:40.135071   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:27:40.135106   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:27:40.135113   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:27:40.181712   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:27:40.181865   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:27:40.243857   46588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:27:40.243926   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:27:40.276172   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:27:40.276236   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:27:40.276241   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:27:40.281241   46588 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:27:40.474509   46588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:27:40.609160   46588 cache_images.go:92] duration metric: took 1.080138224s to LoadCachedImages
	W0924 19:27:40.609245   46588 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
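
In the cache_images.go flow above, every required image is probed with `podman image inspect`; anything missing from the runtime is marked "needs transfer", removed via crictl, and scheduled to be loaded from the local image cache, which is why this run ends with the warning once those cache files turn out not to exist. A hedged sketch of the existence check that drives that decision (the cache directory layout below is a simplified, hypothetical stand-in):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// needsTransfer reports whether image is absent from the container runtime,
// mirroring the `sudo podman image inspect --format {{.Id}}` probes in the log.
func needsTransfer(image string) bool {
	err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run()
	return err != nil
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	// Hypothetical cache layout, used only for this illustration.
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64"
	for _, img := range images {
		if needsTransfer(img) {
			cached := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
			fmt.Printf("%s needs transfer; would load from %s\n", img, cached)
		} else {
			fmt.Printf("%s already present in the runtime\n", img)
		}
	}
}
```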
	I0924 19:27:40.609257   46588 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.20.0 crio true true} ...
	I0924 19:27:40.609382   46588 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-629510 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:27:40.609458   46588 ssh_runner.go:195] Run: crio config
	I0924 19:27:40.654842   46588 cni.go:84] Creating CNI manager for ""
	I0924 19:27:40.654870   46588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:27:40.654883   46588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:27:40.654913   46588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-629510 NodeName:kubernetes-upgrade-629510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:27:40.655084   46588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-629510"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:27:40.655137   46588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:27:40.664875   46588 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:27:40.664936   46588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:27:40.675047   46588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0924 19:27:40.694190   46588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:27:40.712297   46588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0924 19:27:40.730396   46588 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0924 19:27:40.734126   46588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
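
The bash one-liner above strips any existing control-plane.minikube.internal entry from /etc/hosts and appends the current IP via a temp file copied back into place. A Go rendering of the same filter-and-append pattern (illustrative; the actual step is the shell command shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so it contains exactly one "ip\thost" entry.
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing line for this hostname (the grep -v part).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh entry and write the file back in place.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.76", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("control-plane.minikube.internal entry updated")
}
```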
	I0924 19:27:40.745668   46588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:27:40.860851   46588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:27:40.876661   46588 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510 for IP: 192.168.39.76
	I0924 19:27:40.876690   46588 certs.go:194] generating shared ca certs ...
	I0924 19:27:40.876721   46588 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:40.876898   46588 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:27:40.876949   46588 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:27:40.876962   46588 certs.go:256] generating profile certs ...
	I0924 19:27:40.877037   46588 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.key
	I0924 19:27:40.877057   46588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.crt with IP's: []
	I0924 19:27:41.057577   46588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.crt ...
	I0924 19:27:41.057609   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.crt: {Name:mkd6fb12a9c6a9cafd115e4aae18158ba3806380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:41.057775   46588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.key ...
	I0924 19:27:41.057789   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.key: {Name:mk6c60686d4695abe876f2cf333a7455b9e04cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:41.057865   46588 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key.49ff1e12
	I0924 19:27:41.057881   46588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt.49ff1e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.76]
	I0924 19:27:41.344430   46588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt.49ff1e12 ...
	I0924 19:27:41.344457   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt.49ff1e12: {Name:mkb38aac403c8130b4a436842d043b826e4175b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:41.344609   46588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key.49ff1e12 ...
	I0924 19:27:41.344622   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key.49ff1e12: {Name:mk02c97a19ee42c445058b3bf1394ffd72688e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:41.344686   46588 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt.49ff1e12 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt
	I0924 19:27:41.344752   46588 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key.49ff1e12 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key
	I0924 19:27:41.344819   46588 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.key
	I0924 19:27:41.344834   46588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.crt with IP's: []
	I0924 19:27:41.393193   46588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.crt ...
	I0924 19:27:41.393223   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.crt: {Name:mk4ba00660e88c2f9f1f441f99d461ca7a0be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:27:41.393377   46588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.key ...
	I0924 19:27:41.393390   46588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.key: {Name:mkb2bd472548b271328d05208bd24672d80d2d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
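
The certs.go steps above reuse the existing minikubeCA from disk and generate three profile certs: a client cert, an apiserver serving cert with the listed IP SANs, and an aggregator proxy-client cert. A compact sketch of producing one such CA-signed cert with Go's crypto/x509 (the key size, validity window, and self-signed stand-in CA are simplifying assumptions, not minikube's exact parameters):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA (the real one is reused from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log reports for the apiserver cert.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.76"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```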
	I0924 19:27:41.393553   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:27:41.393601   46588 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:27:41.393611   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:27:41.393631   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:27:41.393654   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:27:41.393675   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:27:41.393711   46588 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:27:41.394295   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:27:41.418416   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:27:41.441107   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:27:41.463008   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:27:41.485071   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 19:27:41.507268   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:27:41.530305   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:27:41.552354   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:27:41.574095   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:27:41.595691   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:27:41.617523   46588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:27:41.639115   46588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:27:41.656554   46588 ssh_runner.go:195] Run: openssl version
	I0924 19:27:41.662189   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:27:41.672031   46588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:27:41.676384   46588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:27:41.676436   46588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:27:41.681914   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:27:41.695788   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:27:41.710704   46588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:27:41.719644   46588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:27:41.719718   46588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:27:41.727453   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:27:41.740375   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:27:41.752625   46588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:27:41.756838   46588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:27:41.756886   46588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:27:41.762029   46588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:27:41.771790   46588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:27:41.775656   46588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 19:27:41.775713   46588 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-629510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:27:41.775799   46588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:27:41.775845   46588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:27:41.811348   46588 cri.go:89] found id: ""
	I0924 19:27:41.811428   46588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:27:41.820615   46588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:27:41.829185   46588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:27:41.837598   46588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:27:41.837626   46588 kubeadm.go:157] found existing configuration files:
	
	I0924 19:27:41.837670   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:27:41.846037   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:27:41.846094   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:27:41.854981   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:27:41.863380   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:27:41.863437   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:27:41.871971   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:27:41.880007   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:27:41.880064   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:27:41.888414   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:27:41.896218   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:27:41.896279   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
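
The config check above greps each of the four kubeconfig files for the expected https://control-plane.minikube.internal:8443 endpoint; a file that is missing or points elsewhere is removed so kubeadm can regenerate it. A sketch of that check (paths and endpoint copied from the log; the read-then-remove policy is the simplification here):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm writes a fresh one.
			fmt.Printf("%q may not contain %s - will remove\n", f, endpoint)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("%q already points at %s\n", f, endpoint)
	}
}
```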
	I0924 19:27:41.904497   46588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:27:42.006760   46588 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:27:42.006885   46588 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:27:42.158665   46588 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:27:42.158766   46588 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:27:42.158939   46588 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:27:42.321928   46588 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:27:42.492449   46588 out.go:235]   - Generating certificates and keys ...
	I0924 19:27:42.492575   46588 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:27:42.492703   46588 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:27:42.600255   46588 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 19:27:42.854802   46588 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 19:27:43.349512   46588 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 19:27:43.442426   46588 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 19:27:43.748109   46588 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 19:27:43.748347   46588 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0924 19:27:43.842109   46588 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 19:27:43.842350   46588 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	I0924 19:27:43.945953   46588 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 19:27:44.138154   46588 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 19:27:44.262842   46588 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 19:27:44.263005   46588 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:27:44.578756   46588 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:27:44.825799   46588 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:27:44.965049   46588 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:27:45.160865   46588 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:27:45.175297   46588 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:27:45.175921   46588 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:27:45.176005   46588 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:27:45.305970   46588 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:27:45.308431   46588 out.go:235]   - Booting up control plane ...
	I0924 19:27:45.308581   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:27:45.316637   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:27:45.318154   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:27:45.319336   46588 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:27:45.324567   46588 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:28:25.316731   46588 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:28:25.316870   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:28:25.317144   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:28:30.317635   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:28:30.317898   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:28:40.317442   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:28:40.317733   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:29:00.317231   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:29:00.317477   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:29:40.318336   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:29:40.318532   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:29:40.318541   46588 kubeadm.go:310] 
	I0924 19:29:40.318593   46588 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:29:40.318633   46588 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:29:40.318668   46588 kubeadm.go:310] 
	I0924 19:29:40.318722   46588 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:29:40.318777   46588 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:29:40.318923   46588 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:29:40.318932   46588 kubeadm.go:310] 
	I0924 19:29:40.319023   46588 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:29:40.319053   46588 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:29:40.319080   46588 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:29:40.319086   46588 kubeadm.go:310] 
	I0924 19:29:40.319208   46588 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:29:40.319300   46588 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:29:40.319311   46588 kubeadm.go:310] 
	I0924 19:29:40.319440   46588 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:29:40.319570   46588 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:29:40.319678   46588 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:29:40.319786   46588 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:29:40.319806   46588 kubeadm.go:310] 
	I0924 19:29:40.320098   46588 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:29:40.320195   46588 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:29:40.320295   46588 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:29:40.320439   46588 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-629510 localhost] and IPs [192.168.39.76 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 19:29:40.320481   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:29:41.683822   46588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.363311807s)
	I0924 19:29:41.683909   46588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:29:41.697131   46588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:29:41.706243   46588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:29:41.706261   46588 kubeadm.go:157] found existing configuration files:
	
	I0924 19:29:41.706299   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:29:41.715005   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:29:41.715049   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:29:41.724247   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:29:41.732926   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:29:41.732978   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:29:41.742452   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:29:41.751943   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:29:41.751999   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:29:41.761584   46588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:29:41.771018   46588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:29:41.771068   46588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:29:41.780925   46588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:29:41.847829   46588 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:29:41.847891   46588 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:29:41.983124   46588 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:29:41.983248   46588 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:29:41.983384   46588 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:29:42.150732   46588 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:29:42.153551   46588 out.go:235]   - Generating certificates and keys ...
	I0924 19:29:42.153646   46588 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:29:42.153726   46588 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:29:42.153835   46588 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:29:42.153948   46588 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:29:42.154059   46588 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:29:42.154150   46588 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:29:42.154224   46588 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:29:42.154281   46588 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:29:42.154356   46588 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:29:42.154487   46588 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:29:42.154539   46588 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:29:42.154615   46588 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:29:42.332985   46588 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:29:42.526776   46588 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:29:42.930387   46588 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:29:43.279651   46588 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:29:43.297186   46588 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:29:43.297303   46588 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:29:43.297360   46588 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:29:43.447573   46588 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:29:43.449799   46588 out.go:235]   - Booting up control plane ...
	I0924 19:29:43.449946   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:29:43.452092   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:29:43.453135   46588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:29:43.453884   46588 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:29:43.457050   46588 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:30:23.461106   46588 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:30:23.461395   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:30:23.461681   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:30:28.461982   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:30:28.462177   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:30:38.462609   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:30:38.462901   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:30:58.461772   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:30:58.461996   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:31:38.461251   46588 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:31:38.461482   46588 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:31:38.461504   46588 kubeadm.go:310] 
	I0924 19:31:38.461547   46588 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:31:38.461597   46588 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:31:38.461605   46588 kubeadm.go:310] 
	I0924 19:31:38.461660   46588 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:31:38.461717   46588 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:31:38.461823   46588 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:31:38.461834   46588 kubeadm.go:310] 
	I0924 19:31:38.461987   46588 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:31:38.462035   46588 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:31:38.462067   46588 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:31:38.462074   46588 kubeadm.go:310] 
	I0924 19:31:38.462167   46588 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:31:38.462239   46588 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:31:38.462245   46588 kubeadm.go:310] 
	I0924 19:31:38.462338   46588 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:31:38.462419   46588 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:31:38.462484   46588 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:31:38.462552   46588 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:31:38.462559   46588 kubeadm.go:310] 
	I0924 19:31:38.463339   46588 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:31:38.463446   46588 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:31:38.463523   46588 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:31:38.463601   46588 kubeadm.go:394] duration metric: took 3m56.687892158s to StartCluster
	I0924 19:31:38.463640   46588 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:31:38.463693   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:31:38.497281   46588 cri.go:89] found id: ""
	I0924 19:31:38.497310   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.497319   46588 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:31:38.497327   46588 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:31:38.497391   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:31:38.534147   46588 cri.go:89] found id: ""
	I0924 19:31:38.534172   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.534180   46588 logs.go:278] No container was found matching "etcd"
	I0924 19:31:38.534186   46588 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:31:38.534244   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:31:38.567606   46588 cri.go:89] found id: ""
	I0924 19:31:38.567635   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.567645   46588 logs.go:278] No container was found matching "coredns"
	I0924 19:31:38.567651   46588 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:31:38.567712   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:31:38.603426   46588 cri.go:89] found id: ""
	I0924 19:31:38.603459   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.603471   46588 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:31:38.603479   46588 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:31:38.603550   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:31:38.636373   46588 cri.go:89] found id: ""
	I0924 19:31:38.636398   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.636407   46588 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:31:38.636420   46588 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:31:38.636475   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:31:38.668976   46588 cri.go:89] found id: ""
	I0924 19:31:38.669001   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.669009   46588 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:31:38.669015   46588 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:31:38.669075   46588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:31:38.704573   46588 cri.go:89] found id: ""
	I0924 19:31:38.704597   46588 logs.go:276] 0 containers: []
	W0924 19:31:38.704605   46588 logs.go:278] No container was found matching "kindnet"
	I0924 19:31:38.704612   46588 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:31:38.704634   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:31:38.807716   46588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:31:38.807742   46588 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:31:38.807757   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:31:38.917665   46588 logs.go:123] Gathering logs for container status ...
	I0924 19:31:38.917711   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:31:38.956750   46588 logs.go:123] Gathering logs for kubelet ...
	I0924 19:31:38.956781   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:31:39.023277   46588 logs.go:123] Gathering logs for dmesg ...
	I0924 19:31:39.023307   46588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0924 19:31:39.039118   46588 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:31:39.039203   46588 out.go:270] * 
	* 
	W0924 19:31:39.039264   46588 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:31:39.039283   46588 out.go:270] * 
	* 
	W0924 19:31:39.040362   46588 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:31:39.044084   46588 out.go:201] 
	W0924 19:31:39.045580   46588 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:31:39.045640   46588 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:31:39.045670   46588 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:31:39.046999   46588 out.go:201] 

                                                
                                                
** /stderr **
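The kubeadm init above fails because the kubelet never answers its health check on port 10248, and minikube's own suggestion points at a kubelet cgroup-driver mismatch. A hedged sketch of the retry that suggestion implies, reusing the exact profile and flags from this run (the --extra-config flag comes from the suggestion text above and was not actually attempted in this report):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses to start, 'journalctl -xeu kubelet' on the node (per the kubeadm output) is the next place to look.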
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-629510
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-629510: (6.300418395s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-629510 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-629510 status --format={{.Host}}: exit status 7 (85.667808ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.428340298s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-629510 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.3091ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-629510] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-629510
	    minikube start -p kubernetes-upgrade-629510 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6295102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-629510 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-629510 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.816331067s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-24 19:33:12.880144249 +0000 UTC m=+4401.982155945
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-629510 -n kubernetes-upgrade-629510
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-629510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-629510 logs -n 25: (1.635027013s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-038637 sudo                  | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo cat              | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo cat              | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo                  | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo                  | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo                  | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo find             | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-038637 sudo crio             | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-038637                       | cilium-038637             | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC | 24 Sep 24 19:30 UTC |
	| start   | -p cert-expiration-563000              | cert-expiration-563000    | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC | 24 Sep 24 19:31 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-166165           | force-systemd-flag-166165 | jenkins | v1.34.0 | 24 Sep 24 19:30 UTC | 24 Sep 24 19:32 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-940861            | force-systemd-env-940861  | jenkins | v1.34.0 | 24 Sep 24 19:31 UTC | 24 Sep 24 19:31 UTC |
	| start   | -p pause-058963 --memory=2048          | pause-058963              | jenkins | v1.34.0 | 24 Sep 24 19:31 UTC | 24 Sep 24 19:33 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-629510           | kubernetes-upgrade-629510 | jenkins | v1.34.0 | 24 Sep 24 19:31 UTC | 24 Sep 24 19:31 UTC |
	| start   | -p kubernetes-upgrade-629510           | kubernetes-upgrade-629510 | jenkins | v1.34.0 | 24 Sep 24 19:31 UTC | 24 Sep 24 19:32 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-166165 ssh cat      | force-systemd-flag-166165 | jenkins | v1.34.0 | 24 Sep 24 19:32 UTC | 24 Sep 24 19:32 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-166165           | force-systemd-flag-166165 | jenkins | v1.34.0 | 24 Sep 24 19:32 UTC | 24 Sep 24 19:32 UTC |
	| start   | -p cert-options-103452                 | cert-options-103452       | jenkins | v1.34.0 | 24 Sep 24 19:32 UTC | 24 Sep 24 19:33 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-629510           | kubernetes-upgrade-629510 | jenkins | v1.34.0 | 24 Sep 24 19:32 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-629510           | kubernetes-upgrade-629510 | jenkins | v1.34.0 | 24 Sep 24 19:32 UTC | 24 Sep 24 19:33 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-058963                        | pause-058963              | jenkins | v1.34.0 | 24 Sep 24 19:33 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-103452 ssh                | cert-options-103452       | jenkins | v1.34.0 | 24 Sep 24 19:33 UTC | 24 Sep 24 19:33 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-103452 -- sudo         | cert-options-103452       | jenkins | v1.34.0 | 24 Sep 24 19:33 UTC | 24 Sep 24 19:33 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-103452                 | cert-options-103452       | jenkins | v1.34.0 | 24 Sep 24 19:33 UTC | 24 Sep 24 19:33 UTC |
	| start   | -p auto-038637 --memory=3072           | auto-038637               | jenkins | v1.34.0 | 24 Sep 24 19:33 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:33:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:33:08.474394   54332 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:33:08.474480   54332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:33:08.474487   54332 out.go:358] Setting ErrFile to fd 2...
	I0924 19:33:08.474491   54332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:33:08.474678   54332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:33:08.475283   54332 out.go:352] Setting JSON to false
	I0924 19:33:08.476206   54332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4539,"bootTime":1727201849,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:33:08.476291   54332 start.go:139] virtualization: kvm guest
	I0924 19:33:08.478928   54332 out.go:177] * [auto-038637] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:33:08.480218   54332 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:33:08.480283   54332 notify.go:220] Checking for updates...
	I0924 19:33:08.482551   54332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:33:08.483873   54332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:33:08.485351   54332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:33:08.486503   54332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:33:08.487551   54332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:33:08.489309   54332 config.go:182] Loaded profile config "cert-expiration-563000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:33:08.489459   54332 config.go:182] Loaded profile config "kubernetes-upgrade-629510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:33:08.489657   54332 config.go:182] Loaded profile config "pause-058963": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:33:08.489763   54332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:33:08.526616   54332 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 19:33:08.528227   54332 start.go:297] selected driver: kvm2
	I0924 19:33:08.528245   54332 start.go:901] validating driver "kvm2" against <nil>
	I0924 19:33:08.528260   54332 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:33:08.529089   54332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:33:08.529186   54332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:33:08.545720   54332 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:33:08.545771   54332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 19:33:08.546044   54332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:33:08.546073   54332 cni.go:84] Creating CNI manager for ""
	I0924 19:33:08.546106   54332 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:33:08.546122   54332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 19:33:08.546179   54332 start.go:340] cluster config:
	{Name:auto-038637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-038637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:33:08.546300   54332 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:33:08.548094   54332 out.go:177] * Starting "auto-038637" primary control-plane node in "auto-038637" cluster
	I0924 19:33:05.235453   54102 machine.go:93] provisionDockerMachine start ...
	I0924 19:33:05.235476   54102 main.go:141] libmachine: (pause-058963) Calling .DriverName
	I0924 19:33:05.235723   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:05.238364   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.238853   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.238882   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.239004   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHPort
	I0924 19:33:05.239176   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.239339   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.239462   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHUsername
	I0924 19:33:05.239636   54102 main.go:141] libmachine: Using SSH client type: native
	I0924 19:33:05.239845   54102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0924 19:33:05.239864   54102 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:33:05.351733   54102 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-058963
	
	I0924 19:33:05.351759   54102 main.go:141] libmachine: (pause-058963) Calling .GetMachineName
	I0924 19:33:05.352005   54102 buildroot.go:166] provisioning hostname "pause-058963"
	I0924 19:33:05.352030   54102 main.go:141] libmachine: (pause-058963) Calling .GetMachineName
	I0924 19:33:05.352224   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:05.354993   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.355320   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.355346   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.355495   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHPort
	I0924 19:33:05.355687   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.355843   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.355959   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHUsername
	I0924 19:33:05.356091   54102 main.go:141] libmachine: Using SSH client type: native
	I0924 19:33:05.356268   54102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0924 19:33:05.356286   54102 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-058963 && echo "pause-058963" | sudo tee /etc/hostname
	I0924 19:33:05.479020   54102 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-058963
	
	I0924 19:33:05.479064   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:05.481775   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.482205   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.482235   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.482408   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHPort
	I0924 19:33:05.482602   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.482802   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.482959   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHUsername
	I0924 19:33:05.483099   54102 main.go:141] libmachine: Using SSH client type: native
	I0924 19:33:05.483281   54102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0924 19:33:05.483303   54102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-058963' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-058963/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-058963' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:33:05.595929   54102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:33:05.595963   54102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:33:05.595983   54102 buildroot.go:174] setting up certificates
	I0924 19:33:05.595992   54102 provision.go:84] configureAuth start
	I0924 19:33:05.596000   54102 main.go:141] libmachine: (pause-058963) Calling .GetMachineName
	I0924 19:33:05.596284   54102 main.go:141] libmachine: (pause-058963) Calling .GetIP
	I0924 19:33:05.599061   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.599363   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.599408   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.599503   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:05.601841   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.602236   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.602267   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.602386   54102 provision.go:143] copyHostCerts
	I0924 19:33:05.602457   54102 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:33:05.602470   54102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:33:05.602534   54102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:33:05.602653   54102 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:33:05.602664   54102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:33:05.602697   54102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:33:05.602771   54102 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:33:05.602780   54102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:33:05.602809   54102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:33:05.602899   54102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.pause-058963 san=[127.0.0.1 192.168.50.184 localhost minikube pause-058963]
	I0924 19:33:05.909367   54102 provision.go:177] copyRemoteCerts
	I0924 19:33:05.909452   54102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:33:05.909481   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:05.912844   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.913233   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:05.913267   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:05.913568   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHPort
	I0924 19:33:05.913798   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:05.913995   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHUsername
	I0924 19:33:05.914180   54102 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/pause-058963/id_rsa Username:docker}
	I0924 19:33:06.013258   54102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:33:06.045569   54102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0924 19:33:06.080169   54102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:33:06.109557   54102 provision.go:87] duration metric: took 513.551093ms to configureAuth
	I0924 19:33:06.109590   54102 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:33:06.109862   54102 config.go:182] Loaded profile config "pause-058963": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:33:06.109983   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHHostname
	I0924 19:33:06.113277   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:06.113671   54102 main.go:141] libmachine: (pause-058963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:20:e5", ip: ""} in network mk-pause-058963: {Iface:virbr4 ExpiryTime:2024-09-24 20:31:57 +0000 UTC Type:0 Mac:52:54:00:6b:20:e5 Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:pause-058963 Clientid:01:52:54:00:6b:20:e5}
	I0924 19:33:06.113695   54102 main.go:141] libmachine: (pause-058963) DBG | domain pause-058963 has defined IP address 192.168.50.184 and MAC address 52:54:00:6b:20:e5 in network mk-pause-058963
	I0924 19:33:06.114025   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHPort
	I0924 19:33:06.114237   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:06.114412   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHKeyPath
	I0924 19:33:06.114561   54102 main.go:141] libmachine: (pause-058963) Calling .GetSSHUsername
	I0924 19:33:06.114728   54102 main.go:141] libmachine: Using SSH client type: native
	I0924 19:33:06.114992   54102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0924 19:33:06.115019   54102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:33:07.145150   53912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:33:07.159837   53912 api_server.go:72] duration metric: took 1.015659952s to wait for apiserver process to appear ...
	I0924 19:33:07.159865   53912 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:33:07.159886   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:09.726295   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:33:09.726326   53912 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:33:09.726340   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:09.764760   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:33:09.764786   53912 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:33:10.160217   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:10.168007   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:33:10.168031   53912 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:33:10.660811   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:10.668663   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:33:10.668692   53912 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:33:11.160199   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:11.164373   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0924 19:33:11.170992   53912 api_server.go:141] control plane version: v1.31.1
	I0924 19:33:11.171018   53912 api_server.go:131] duration metric: took 4.011145089s to wait for apiserver health ...
	I0924 19:33:11.171027   53912 cni.go:84] Creating CNI manager for ""
	I0924 19:33:11.171036   53912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:33:11.172937   53912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:33:11.174227   53912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:33:11.184794   53912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:33:11.203586   53912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:33:11.203666   53912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 19:33:11.203685   53912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 19:33:11.213134   53912 system_pods.go:59] 8 kube-system pods found
	I0924 19:33:11.213169   53912 system_pods.go:61] "coredns-7c65d6cfc9-4dv5g" [52302d2b-8d0a-430f-8a44-2239049498bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:33:11.213182   53912 system_pods.go:61] "coredns-7c65d6cfc9-fblwg" [18f3f174-56e8-4534-9316-aee8bfc4740e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:33:11.213190   53912 system_pods.go:61] "etcd-kubernetes-upgrade-629510" [2c0a1ad8-1593-4035-b897-a60bf23b1b7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:33:11.213199   53912 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-629510" [462cdd76-29e5-4905-a11c-24c1ce426603] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:33:11.213213   53912 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-629510" [d428b245-7b0d-46fe-bc26-96bb50b0d232] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:33:11.213223   53912 system_pods.go:61] "kube-proxy-fq4b4" [2b5c3aab-5d51-44e1-94d5-99d6ecc8b772] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:33:11.213231   53912 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-629510" [3db97de6-d0a2-460b-bfbd-9d0b38eac912] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:33:11.213237   53912 system_pods.go:61] "storage-provisioner" [858d52d1-814f-4f13-8d78-7e9a7d28731f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:33:11.213244   53912 system_pods.go:74] duration metric: took 9.638711ms to wait for pod list to return data ...
	I0924 19:33:11.213254   53912 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:33:11.216386   53912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:33:11.216408   53912 node_conditions.go:123] node cpu capacity is 2
	I0924 19:33:11.216417   53912 node_conditions.go:105] duration metric: took 3.159163ms to run NodePressure ...
	I0924 19:33:11.216447   53912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:33:11.506956   53912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:33:11.518312   53912 ops.go:34] apiserver oom_adj: -16
	I0924 19:33:11.518350   53912 kubeadm.go:597] duration metric: took 7.786897844s to restartPrimaryControlPlane
	I0924 19:33:11.518366   53912 kubeadm.go:394] duration metric: took 7.900135427s to StartCluster
	I0924 19:33:11.518415   53912 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:33:11.518518   53912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:33:11.519731   53912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:33:11.519946   53912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:33:11.520018   53912 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:33:11.520112   53912 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-629510"
	I0924 19:33:11.520131   53912 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-629510"
	I0924 19:33:11.520136   53912 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-629510"
	I0924 19:33:11.520161   53912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-629510"
	I0924 19:33:11.520217   53912 config.go:182] Loaded profile config "kubernetes-upgrade-629510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0924 19:33:11.520143   53912 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:33:11.520300   53912 host.go:66] Checking if "kubernetes-upgrade-629510" exists ...
	I0924 19:33:11.520613   53912 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:33:11.520653   53912 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:33:11.520664   53912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:33:11.520688   53912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:33:11.522200   53912 out.go:177] * Verifying Kubernetes components...
	I0924 19:33:11.523781   53912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:33:11.538138   53912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0924 19:33:11.538643   53912 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:33:11.539260   53912 main.go:141] libmachine: Using API Version  1
	I0924 19:33:11.539290   53912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:33:11.539718   53912 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:33:11.540377   53912 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:33:11.540437   53912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:33:11.540561   53912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0924 19:33:11.540985   53912 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:33:11.541456   53912 main.go:141] libmachine: Using API Version  1
	I0924 19:33:11.541477   53912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:33:11.541822   53912 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:33:11.542043   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetState
	I0924 19:33:11.544435   53912 kapi.go:59] client config for kubernetes-upgrade-629510: &rest.Config{Host:"https://192.168.39.76:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.crt", KeyFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kubernetes-upgrade-629510/client.key", CAFile:"/home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 19:33:11.544685   53912 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-629510"
	W0924 19:33:11.544700   53912 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:33:11.544722   53912 host.go:66] Checking if "kubernetes-upgrade-629510" exists ...
	I0924 19:33:11.544985   53912 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:33:11.545024   53912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:33:11.559909   53912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0924 19:33:11.559991   53912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0924 19:33:11.560397   53912 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:33:11.560501   53912 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:33:11.560932   53912 main.go:141] libmachine: Using API Version  1
	I0924 19:33:11.560953   53912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:33:11.561106   53912 main.go:141] libmachine: Using API Version  1
	I0924 19:33:11.561132   53912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:33:11.561446   53912 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:33:11.561539   53912 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:33:11.561658   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetState
	I0924 19:33:11.562070   53912 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:33:11.562116   53912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:33:11.563533   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:33:11.565501   53912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:33:08.549257   54332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:33:08.549296   54332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 19:33:08.549302   54332 cache.go:56] Caching tarball of preloaded images
	I0924 19:33:08.549387   54332 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:33:08.549401   54332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 19:33:08.549484   54332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/config.json ...
	I0924 19:33:08.549500   54332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/config.json: {Name:mkcb76609be1e456909c8460c9b1c546a9d3cfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:33:08.549647   54332 start.go:360] acquireMachinesLock for auto-038637: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:33:11.907706   54332 start.go:364] duration metric: took 3.358013032s to acquireMachinesLock for "auto-038637"
	I0924 19:33:11.907774   54332 start.go:93] Provisioning new machine with config: &{Name:auto-038637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:auto-038637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:33:11.907898   54332 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 19:33:11.567305   53912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:33:11.567324   53912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:33:11.567343   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:33:11.570400   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:33:11.570978   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:33:11.571007   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:33:11.571248   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:33:11.571439   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:33:11.571621   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:33:11.571791   53912 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:33:11.585301   53912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0924 19:33:11.585700   53912 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:33:11.586181   53912 main.go:141] libmachine: Using API Version  1
	I0924 19:33:11.586201   53912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:33:11.586605   53912 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:33:11.586801   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetState
	I0924 19:33:11.588823   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .DriverName
	I0924 19:33:11.589088   53912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:33:11.589104   53912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:33:11.589122   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHHostname
	I0924 19:33:11.594009   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:33:11.594372   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:db:d8", ip: ""} in network mk-kubernetes-upgrade-629510: {Iface:virbr1 ExpiryTime:2024-09-24 20:27:24 +0000 UTC Type:0 Mac:52:54:00:0b:db:d8 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:kubernetes-upgrade-629510 Clientid:01:52:54:00:0b:db:d8}
	I0924 19:33:11.594394   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | domain kubernetes-upgrade-629510 has defined IP address 192.168.39.76 and MAC address 52:54:00:0b:db:d8 in network mk-kubernetes-upgrade-629510
	I0924 19:33:11.594635   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHPort
	I0924 19:33:11.594782   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHKeyPath
	I0924 19:33:11.594941   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .GetSSHUsername
	I0924 19:33:11.595095   53912 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/kubernetes-upgrade-629510/id_rsa Username:docker}
	I0924 19:33:11.752377   53912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:33:11.772622   53912 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:33:11.772695   53912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:33:11.785808   53912 api_server.go:72] duration metric: took 265.833017ms to wait for apiserver process to appear ...
	I0924 19:33:11.785833   53912 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:33:11.785854   53912 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0924 19:33:11.792361   53912 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0924 19:33:11.793563   53912 api_server.go:141] control plane version: v1.31.1
	I0924 19:33:11.793588   53912 api_server.go:131] duration metric: took 7.747387ms to wait for apiserver health ...
	I0924 19:33:11.793599   53912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:33:11.802253   53912 system_pods.go:59] 8 kube-system pods found
	I0924 19:33:11.802285   53912 system_pods.go:61] "coredns-7c65d6cfc9-4dv5g" [52302d2b-8d0a-430f-8a44-2239049498bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:33:11.802295   53912 system_pods.go:61] "coredns-7c65d6cfc9-fblwg" [18f3f174-56e8-4534-9316-aee8bfc4740e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:33:11.802307   53912 system_pods.go:61] "etcd-kubernetes-upgrade-629510" [2c0a1ad8-1593-4035-b897-a60bf23b1b7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:33:11.802316   53912 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-629510" [462cdd76-29e5-4905-a11c-24c1ce426603] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:33:11.802329   53912 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-629510" [d428b245-7b0d-46fe-bc26-96bb50b0d232] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:33:11.802335   53912 system_pods.go:61] "kube-proxy-fq4b4" [2b5c3aab-5d51-44e1-94d5-99d6ecc8b772] Running
	I0924 19:33:11.802348   53912 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-629510" [3db97de6-d0a2-460b-bfbd-9d0b38eac912] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:33:11.802354   53912 system_pods.go:61] "storage-provisioner" [858d52d1-814f-4f13-8d78-7e9a7d28731f] Running
	I0924 19:33:11.802364   53912 system_pods.go:74] duration metric: took 8.758627ms to wait for pod list to return data ...
	I0924 19:33:11.802376   53912 kubeadm.go:582] duration metric: took 282.404264ms to wait for: map[apiserver:true system_pods:true]
	I0924 19:33:11.802394   53912 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:33:11.807914   53912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:33:11.807931   53912 node_conditions.go:123] node cpu capacity is 2
	I0924 19:33:11.807939   53912 node_conditions.go:105] duration metric: took 5.541661ms to run NodePressure ...
	I0924 19:33:11.807948   53912 start.go:241] waiting for startup goroutines ...
	I0924 19:33:11.832963   53912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:33:11.998817   53912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:33:12.091847   53912 main.go:141] libmachine: Making call to close driver server
	I0924 19:33:12.091887   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Close
	I0924 19:33:12.092185   53912 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:33:12.092206   53912 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:33:12.092214   53912 main.go:141] libmachine: Making call to close driver server
	I0924 19:33:12.092218   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Closing plugin on server side
	I0924 19:33:12.092221   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Close
	I0924 19:33:12.092487   53912 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:33:12.092501   53912 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:33:12.116627   53912 main.go:141] libmachine: Making call to close driver server
	I0924 19:33:12.116652   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Close
	I0924 19:33:12.116972   53912 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:33:12.116991   53912 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:33:12.796940   53912 main.go:141] libmachine: Making call to close driver server
	I0924 19:33:12.796966   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Close
	I0924 19:33:12.797293   53912 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:33:12.797311   53912 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:33:12.797321   53912 main.go:141] libmachine: Making call to close driver server
	I0924 19:33:12.797329   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) Calling .Close
	I0924 19:33:12.798929   53912 main.go:141] libmachine: (kubernetes-upgrade-629510) DBG | Closing plugin on server side
	I0924 19:33:12.798988   53912 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:33:12.799005   53912 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:33:12.800645   53912 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0924 19:33:12.802278   53912 addons.go:510] duration metric: took 1.28226395s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0924 19:33:12.802327   53912 start.go:246] waiting for cluster config update ...
	I0924 19:33:12.802343   53912 start.go:255] writing updated cluster config ...
	I0924 19:33:12.802662   53912 ssh_runner.go:195] Run: rm -f paused
	I0924 19:33:12.861363   53912 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:33:12.863352   53912 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-629510" cluster and "default" namespace by default
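
The api_server.go entries above (19:33:11.785 through 19:33:11.793) show the run polling the apiserver's /healthz endpoint until it returns 200 with body "ok" before checking the control-plane version. A minimal Go sketch of that probe is below; it is not minikube's implementation — the real client authenticates with the profile's client certificate and CA (see the kapi.go client config earlier in the log), whereas this sketch assumes anonymous access to /healthz and skips certificate verification for brevity. The address is taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: InsecureSkipVerify stands in for the client cert/CA handling
	// that the test harness performs when it builds its rest.Config.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.39.76:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Matches the "returned 200: ok" lines in the log above.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}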
	
	
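The system_pods.go lines earlier in the log ("8 kube-system pods found", followed by per-pod readiness) amount to listing pods in the kube-system namespace and inspecting their status. A rough client-go sketch of that listing is below, using the kubeconfig path that appears in the log; the per-container readiness reporting minikube does is simplified here to printing each pod's phase.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19700-3751/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}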
	==> CRI-O <==
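
The debug entries below are the kubelet's periodic CRI calls against CRI-O (Version, ImageFsInfo, ListContainers) and their responses. On the node itself, `sudo crictl ps -a` gives the same container view; as a hedged sketch, the ListContainers RPC can also be issued directly over the CRI socket as shown here. The socket path is an assumption for a default CRI-O install, and the empty filter mirrors the "No filters were applied" responses in the log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path on the node; adjust if configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter: list every container, running or exited.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  attempt=%d  state=%s\n",
			c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}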
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.633893555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727206393631137308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24d27a26-d0a7-45e0-bb40-40fd33906221 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.636417119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cfb4eb2-84d7-4c8e-b167-ba60ea678174 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.636521757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cfb4eb2-84d7-4c8e-b167-ba60ea678174 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.636961595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a5b338e3dde77de8a521b43487dfe0756112a956de04ea168203d5b3b1a9e31,PodSandboxId:1e31a328c5512eabc44aecebe937ae9f7beaeefbbc162d6bf7bcfd463f1822f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727206390394986794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61824d3852468364ce788600c9bc39d35adfe992401a7c68650bb87028da2461,PodSandboxId:c29a0fc8e4d36d853189758ae8b2c7e87af90eaf0eb67cb269f6dc927714587b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390378839749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f32d83856f5c793e1112a9a734550b0a41dee691eb6d5204641f8641893afe,PodSandboxId:36d8327e63cad8aef9e7511e48d1abfd7abcb46e0bfc9db221298687a08c3de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390390187794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f181ac153491f6c938209f6edb9ca74e0aab54a1545eee4190bcc869d88f352,PodSandboxId:de40f1ae0f65ac4724459c9c4247c72aac8b4f89bd1a07ea00b6c221bf3a0a75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727206390373170979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1526d11e7484015af7a0d5b9d384fa27d3b21ec2530598a5ccfd92eca9c000e,PodSandboxId:18bf6a2f868a7a13e2faf78cb9ff083511f1dc7dbb639db9010d40828540fd5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727206386538040933,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db143bd97a4287b80e208be9c00a02ccffad1f23a4804498f33e62cefbd2a039,PodSandboxId:4bbea1bc69d4d8ca4794ca37adcbe9f5cc8548c8c2da0f862f80a3777c3ef43f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727206386565527480,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab7f0a7b42cf22b34a8611398011cb58f0a8cf9cc6c5b531fc4df4dab760f8d,PodSandboxId:a2f11d4dab752f8e4fc12a1d3ad9d9388184e45ba27c8a31d8b5396547a3a154,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727206386570284876,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684bc750558aa2f0886f564c9cc108923fa1d88b3ca1b04019e216e3062e4eb,PodSandboxId:c9425564b4533c18eeeb36fe435e034addadbc0cd1cad80362f564d67a59af61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727206386530498172,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746,PodSandboxId:0a5c9838796541fda11cd9c3eab05ee63def350f14f8be2badb5e75a3787edb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381340675977,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec,PodSandboxId:af9850bcdca39257f5440b969b17ccab0867272b8acdaa9459299d6efcf1308d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381046637594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684,PodSandboxId:2c9c9d7cf7b87060739d6978e5189ac407682fb4ee6418
ae0e87632f7eda305e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727206380487052763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9,PodSandboxId:4bfd15546871e20561ac376441fbefa161b82b7f0e9973e20d49f6f6f5896011,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727206380310773133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c,PodSandboxId:5e5fbb99c606df0ffa2861185a5bfa3245d765daf7e6e785b25b8c8e31d6e6b8,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727206380287332526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586,PodSandboxId:0a1dda18b181860c5f323fe1213a1ccc86da45a0161c36edf4004adda22af7c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727206380225015090,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6,PodSandboxId:a3416efb91fbfd874fe676fb494a5cdf2733775f08e11704bdf5cb3cdc4365f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Ima
geSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727206380148180896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f,PodSandboxId:0a7c74ab0955f97b025c304180ba00b29bf84af94fc3d470ae4c251f0627f04d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727206380126715848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cfb4eb2-84d7-4c8e-b167-ba60ea678174 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.683522454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7c955c9-9db8-47cc-9f53-7bb9fc6ddf56 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.683605368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7c955c9-9db8-47cc-9f53-7bb9fc6ddf56 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.684839537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4817dbd-1a9f-4d53-b34f-c2abab3e8ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.685433441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727206393685404277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4817dbd-1a9f-4d53-b34f-c2abab3e8ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.686031319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4420173-1ba8-46c3-b6c3-3e1d13890751 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.686126186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4420173-1ba8-46c3-b6c3-3e1d13890751 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.686656994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a5b338e3dde77de8a521b43487dfe0756112a956de04ea168203d5b3b1a9e31,PodSandboxId:1e31a328c5512eabc44aecebe937ae9f7beaeefbbc162d6bf7bcfd463f1822f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727206390394986794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61824d3852468364ce788600c9bc39d35adfe992401a7c68650bb87028da2461,PodSandboxId:c29a0fc8e4d36d853189758ae8b2c7e87af90eaf0eb67cb269f6dc927714587b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390378839749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f32d83856f5c793e1112a9a734550b0a41dee691eb6d5204641f8641893afe,PodSandboxId:36d8327e63cad8aef9e7511e48d1abfd7abcb46e0bfc9db221298687a08c3de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390390187794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f181ac153491f6c938209f6edb9ca74e0aab54a1545eee4190bcc869d88f352,PodSandboxId:de40f1ae0f65ac4724459c9c4247c72aac8b4f89bd1a07ea00b6c221bf3a0a75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727206390373170979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1526d11e7484015af7a0d5b9d384fa27d3b21ec2530598a5ccfd92eca9c000e,PodSandboxId:18bf6a2f868a7a13e2faf78cb9ff083511f1dc7dbb639db9010d40828540fd5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727206386538040933,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db143bd97a4287b80e208be9c00a02ccffad1f23a4804498f33e62cefbd2a039,PodSandboxId:4bbea1bc69d4d8ca4794ca37adcbe9f5cc8548c8c2da0f862f80a3777c3ef43f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727206386565527480,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab7f0a7b42cf22b34a8611398011cb58f0a8cf9cc6c5b531fc4df4dab760f8d,PodSandboxId:a2f11d4dab752f8e4fc12a1d3ad9d9388184e45ba27c8a31d8b5396547a3a154,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727206386570284876,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684bc750558aa2f0886f564c9cc108923fa1d88b3ca1b04019e216e3062e4eb,PodSandboxId:c9425564b4533c18eeeb36fe435e034addadbc0cd1cad80362f564d67a59af61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727206386530498172,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746,PodSandboxId:0a5c9838796541fda11cd9c3eab05ee63def350f14f8be2badb5e75a3787edb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381340675977,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec,PodSandboxId:af9850bcdca39257f5440b969b17ccab0867272b8acdaa9459299d6efcf1308d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381046637594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684,PodSandboxId:2c9c9d7cf7b87060739d6978e5189ac407682fb4ee6418
ae0e87632f7eda305e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727206380487052763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9,PodSandboxId:4bfd15546871e20561ac376441fbefa161b82b7f0e9973e20d49f6f6f5896011,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727206380310773133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c,PodSandboxId:5e5fbb99c606df0ffa2861185a5bfa3245d765daf7e6e785b25b8c8e31d6e6b8,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727206380287332526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586,PodSandboxId:0a1dda18b181860c5f323fe1213a1ccc86da45a0161c36edf4004adda22af7c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727206380225015090,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6,PodSandboxId:a3416efb91fbfd874fe676fb494a5cdf2733775f08e11704bdf5cb3cdc4365f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Ima
geSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727206380148180896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f,PodSandboxId:0a7c74ab0955f97b025c304180ba00b29bf84af94fc3d470ae4c251f0627f04d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727206380126715848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4420173-1ba8-46c3-b6c3-3e1d13890751 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.732822070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66069391-5cdf-4d7c-9a1d-11e1e4360f8a name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.732893819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66069391-5cdf-4d7c-9a1d-11e1e4360f8a name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.735236927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2aa8d10d-7301-462a-a8bf-799a7f8cd7f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.736022117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727206393735985488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2aa8d10d-7301-462a-a8bf-799a7f8cd7f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.736838352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=787b2f67-72dc-4956-b643-460d8efcacda name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.736920200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=787b2f67-72dc-4956-b643-460d8efcacda name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.737647340Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a5b338e3dde77de8a521b43487dfe0756112a956de04ea168203d5b3b1a9e31,PodSandboxId:1e31a328c5512eabc44aecebe937ae9f7beaeefbbc162d6bf7bcfd463f1822f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727206390394986794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61824d3852468364ce788600c9bc39d35adfe992401a7c68650bb87028da2461,PodSandboxId:c29a0fc8e4d36d853189758ae8b2c7e87af90eaf0eb67cb269f6dc927714587b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390378839749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f32d83856f5c793e1112a9a734550b0a41dee691eb6d5204641f8641893afe,PodSandboxId:36d8327e63cad8aef9e7511e48d1abfd7abcb46e0bfc9db221298687a08c3de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390390187794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f181ac153491f6c938209f6edb9ca74e0aab54a1545eee4190bcc869d88f352,PodSandboxId:de40f1ae0f65ac4724459c9c4247c72aac8b4f89bd1a07ea00b6c221bf3a0a75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727206390373170979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1526d11e7484015af7a0d5b9d384fa27d3b21ec2530598a5ccfd92eca9c000e,PodSandboxId:18bf6a2f868a7a13e2faf78cb9ff083511f1dc7dbb639db9010d40828540fd5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727206386538040933,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db143bd97a4287b80e208be9c00a02ccffad1f23a4804498f33e62cefbd2a039,PodSandboxId:4bbea1bc69d4d8ca4794ca37adcbe9f5cc8548c8c2da0f862f80a3777c3ef43f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727206386565527480,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab7f0a7b42cf22b34a8611398011cb58f0a8cf9cc6c5b531fc4df4dab760f8d,PodSandboxId:a2f11d4dab752f8e4fc12a1d3ad9d9388184e45ba27c8a31d8b5396547a3a154,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727206386570284876,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684bc750558aa2f0886f564c9cc108923fa1d88b3ca1b04019e216e3062e4eb,PodSandboxId:c9425564b4533c18eeeb36fe435e034addadbc0cd1cad80362f564d67a59af61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727206386530498172,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746,PodSandboxId:0a5c9838796541fda11cd9c3eab05ee63def350f14f8be2badb5e75a3787edb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381340675977,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec,PodSandboxId:af9850bcdca39257f5440b969b17ccab0867272b8acdaa9459299d6efcf1308d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381046637594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684,PodSandboxId:2c9c9d7cf7b87060739d6978e5189ac407682fb4ee6418
ae0e87632f7eda305e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727206380487052763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9,PodSandboxId:4bfd15546871e20561ac376441fbefa161b82b7f0e9973e20d49f6f6f5896011,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727206380310773133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c,PodSandboxId:5e5fbb99c606df0ffa2861185a5bfa3245d765daf7e6e785b25b8c8e31d6e6b8,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727206380287332526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586,PodSandboxId:0a1dda18b181860c5f323fe1213a1ccc86da45a0161c36edf4004adda22af7c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727206380225015090,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6,PodSandboxId:a3416efb91fbfd874fe676fb494a5cdf2733775f08e11704bdf5cb3cdc4365f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Ima
geSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727206380148180896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f,PodSandboxId:0a7c74ab0955f97b025c304180ba00b29bf84af94fc3d470ae4c251f0627f04d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727206380126715848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=787b2f67-72dc-4956-b643-460d8efcacda name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.773775493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e9ea730-e417-46d7-b9d9-154ecafa54a1 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.773855884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e9ea730-e417-46d7-b9d9-154ecafa54a1 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.775108951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1539f454-631d-4a06-b7a0-267aa99b5c03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.775485614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727206393775454040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1539f454-631d-4a06-b7a0-267aa99b5c03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.775976764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9c3634e-1f73-4ace-9db5-1f9f0a3d354b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.776040444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9c3634e-1f73-4ace-9db5-1f9f0a3d354b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:33:13 kubernetes-upgrade-629510 crio[3018]: time="2024-09-24 19:33:13.776522772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a5b338e3dde77de8a521b43487dfe0756112a956de04ea168203d5b3b1a9e31,PodSandboxId:1e31a328c5512eabc44aecebe937ae9f7beaeefbbc162d6bf7bcfd463f1822f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727206390394986794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61824d3852468364ce788600c9bc39d35adfe992401a7c68650bb87028da2461,PodSandboxId:c29a0fc8e4d36d853189758ae8b2c7e87af90eaf0eb67cb269f6dc927714587b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390378839749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f32d83856f5c793e1112a9a734550b0a41dee691eb6d5204641f8641893afe,PodSandboxId:36d8327e63cad8aef9e7511e48d1abfd7abcb46e0bfc9db221298687a08c3de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727206390390187794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f181ac153491f6c938209f6edb9ca74e0aab54a1545eee4190bcc869d88f352,PodSandboxId:de40f1ae0f65ac4724459c9c4247c72aac8b4f89bd1a07ea00b6c221bf3a0a75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727206390373170979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1526d11e7484015af7a0d5b9d384fa27d3b21ec2530598a5ccfd92eca9c000e,PodSandboxId:18bf6a2f868a7a13e2faf78cb9ff083511f1dc7dbb639db9010d40828540fd5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727206386538040933,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db143bd97a4287b80e208be9c00a02ccffad1f23a4804498f33e62cefbd2a039,PodSandboxId:4bbea1bc69d4d8ca4794ca37adcbe9f5cc8548c8c2da0f862f80a3777c3ef43f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727206386565527480,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab7f0a7b42cf22b34a8611398011cb58f0a8cf9cc6c5b531fc4df4dab760f8d,PodSandboxId:a2f11d4dab752f8e4fc12a1d3ad9d9388184e45ba27c8a31d8b5396547a3a154,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727206386570284876,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684bc750558aa2f0886f564c9cc108923fa1d88b3ca1b04019e216e3062e4eb,PodSandboxId:c9425564b4533c18eeeb36fe435e034addadbc0cd1cad80362f564d67a59af61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727206386530498172,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746,PodSandboxId:0a5c9838796541fda11cd9c3eab05ee63def350f14f8be2badb5e75a3787edb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381340675977,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fblwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f3f174-56e8-4534-9316-aee8bfc4740e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec,PodSandboxId:af9850bcdca39257f5440b969b17ccab0867272b8acdaa9459299d6efcf1308d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727206381046637594,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4dv5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52302d2b-8d0a-430f-8a44-2239049498bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684,PodSandboxId:2c9c9d7cf7b87060739d6978e5189ac407682fb4ee6418
ae0e87632f7eda305e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727206380487052763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fq4b4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5c3aab-5d51-44e1-94d5-99d6ecc8b772,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9,PodSandboxId:4bfd15546871e20561ac376441fbefa161b82b7f0e9973e20d49f6f6f5896011,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727206380310773133,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858d52d1-814f-4f13-8d78-7e9a7d28731f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c,PodSandboxId:5e5fbb99c606df0ffa2861185a5bfa3245d765daf7e6e785b25b8c8e31d6e6b8,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727206380287332526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556dc119aa1be2af9c0071442085ad05,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586,PodSandboxId:0a1dda18b181860c5f323fe1213a1ccc86da45a0161c36edf4004adda22af7c4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727206380225015090,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1c29b2eac2086834044dddbe2a0d44,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6,PodSandboxId:a3416efb91fbfd874fe676fb494a5cdf2733775f08e11704bdf5cb3cdc4365f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Ima
geSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727206380148180896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb6c6b1d84d0dad30db0a44022e8a1b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f,PodSandboxId:0a7c74ab0955f97b025c304180ba00b29bf84af94fc3d470ae4c251f0627f04d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&
ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727206380126715848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-629510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbedd3bc377cb60cc9e4dcd192d61e8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9c3634e-1f73-4ace-9db5-1f9f0a3d354b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a5b338e3dde7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   1e31a328c5512       storage-provisioner
	11f32d83856f5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   36d8327e63cad       coredns-7c65d6cfc9-4dv5g
	61824d3852468       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   c29a0fc8e4d36       coredns-7c65d6cfc9-fblwg
	8f181ac153491       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   de40f1ae0f65a       kube-proxy-fq4b4
	fab7f0a7b42cf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   a2f11d4dab752       kube-controller-manager-kubernetes-upgrade-629510
	db143bd97a428       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   4bbea1bc69d4d       etcd-kubernetes-upgrade-629510
	e1526d11e7484       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   18bf6a2f868a7       kube-scheduler-kubernetes-upgrade-629510
	9684bc750558a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   c9425564b4533       kube-apiserver-kubernetes-upgrade-629510
	0d3c815405b19       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Exited              coredns                   1                   0a5c983879654       coredns-7c65d6cfc9-fblwg
	1802918539ab9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Exited              coredns                   1                   af9850bcdca39       coredns-7c65d6cfc9-4dv5g
	d50034d46ca75       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   13 seconds ago      Exited              kube-proxy                1                   2c9c9d7cf7b87       kube-proxy-fq4b4
	d82e7b5072d4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       1                   4bfd15546871e       storage-provisioner
	9835079f5523c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   5e5fbb99c606d       etcd-kubernetes-upgrade-629510
	ec7aa833ae294       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   13 seconds ago      Exited              kube-apiserver            1                   0a1dda18b1818       kube-apiserver-kubernetes-upgrade-629510
	052ca8d059f75       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   13 seconds ago      Exited              kube-scheduler            1                   a3416efb91fbf       kube-scheduler-kubernetes-upgrade-629510
	dc513c95cd7bf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   13 seconds ago      Exited              kube-controller-manager   1                   0a7c74ab0955f       kube-controller-manager-kubernetes-upgrade-629510
	
	
	==> coredns [0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746] <==
	
	
	==> coredns [11f32d83856f5c793e1112a9a734550b0a41dee691eb6d5204641f8641893afe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec] <==
	
	
	==> coredns [61824d3852468364ce788600c9bc39d35adfe992401a7c68650bb87028da2461] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-629510
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-629510
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:32:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-629510
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:33:09 +0000   Tue, 24 Sep 2024 19:32:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:33:09 +0000   Tue, 24 Sep 2024 19:32:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:33:09 +0000   Tue, 24 Sep 2024 19:32:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:33:09 +0000   Tue, 24 Sep 2024 19:32:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    kubernetes-upgrade-629510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c6b6efe5c9d43a4b7d46168f3c130a2
	  System UUID:                6c6b6efe-5c9d-43a4-b7d4-6168f3c130a2
	  Boot ID:                    8afa9116-257d-4863-8a50-f04fa87080a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4dv5g                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-7c65d6cfc9-fblwg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-629510                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-629510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-629510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-fq4b4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-629510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    41s (x8 over 42s)  kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x7 over 42s)  kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  41s (x8 over 42s)  kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-629510 event: Registered Node kubernetes-upgrade-629510 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-629510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-629510 event: Registered Node kubernetes-upgrade-629510 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.492080] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.064209] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062724] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.193610] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.120211] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.288016] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +4.041141] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +2.016709] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.058460] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.041149] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.081239] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.167256] kauditd_printk_skb: 65 callbacks suppressed
	[ +13.000547] systemd-fstab-generator[2172]: Ignoring "noauto" option for root device
	[  +0.100371] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.086621] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.499507] systemd-fstab-generator[2436]: Ignoring "noauto" option for root device
	[Sep24 19:33] systemd-fstab-generator[2574]: Ignoring "noauto" option for root device
	[  +0.763538] systemd-fstab-generator[2896]: Ignoring "noauto" option for root device
	[  +1.242721] systemd-fstab-generator[3339]: Ignoring "noauto" option for root device
	[  +3.298385] systemd-fstab-generator[3937]: Ignoring "noauto" option for root device
	[  +0.141126] kauditd_printk_skb: 302 callbacks suppressed
	[  +5.696956] systemd-fstab-generator[4482]: Ignoring "noauto" option for root device
	[  +0.121587] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c] <==
	{"level":"info","ts":"2024-09-24T19:33:00.854342Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-24T19:33:00.873745Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","commit-index":397}
	{"level":"info","ts":"2024-09-24T19:33:00.873876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-24T19:33:00.873897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became follower at term 2"}
	{"level":"info","ts":"2024-09-24T19:33:00.873907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4f06aa0eaa8889d9 [peers: [], term: 2, commit: 397, applied: 0, lastindex: 397, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-24T19:33:00.882388Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-24T19:33:00.905646Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":388}
	{"level":"info","ts":"2024-09-24T19:33:00.912149Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-24T19:33:00.919248Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4f06aa0eaa8889d9","timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:33:00.919523Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4f06aa0eaa8889d9"}
	{"level":"info","ts":"2024-09-24T19:33:00.919557Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"4f06aa0eaa8889d9","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-24T19:33:00.919999Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:33:00.924815Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-24T19:33:00.924966Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T19:33:00.924996Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T19:33:00.925004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T19:33:00.927421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 switched to configuration voters=(5694425758823909849)"}
	{"level":"info","ts":"2024-09-24T19:33:00.927670Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","added-peer-id":"4f06aa0eaa8889d9","added-peer-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-09-24T19:33:00.927765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:33:00.928223Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:33:00.937736Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:33:00.937795Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2024-09-24T19:33:00.937804Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2024-09-24T19:33:00.946459Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4f06aa0eaa8889d9","initial-advertise-peer-urls":["https://192.168.39.76:2380"],"listen-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:33:00.946488Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [db143bd97a4287b80e208be9c00a02ccffad1f23a4804498f33e62cefbd2a039] <==
	{"level":"info","ts":"2024-09-24T19:33:06.893777Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","added-peer-id":"4f06aa0eaa8889d9","added-peer-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-09-24T19:33:06.893882Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:33:06.893920Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:33:06.901012Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:33:06.903285Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:33:06.903525Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4f06aa0eaa8889d9","initial-advertise-peer-urls":["https://192.168.39.76:2380"],"listen-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:33:06.903572Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:33:06.903724Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2024-09-24T19:33:06.903753Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2024-09-24T19:33:08.561124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T19:33:08.561242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:33:08.561303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgPreVoteResp from 4f06aa0eaa8889d9 at term 2"}
	{"level":"info","ts":"2024-09-24T19:33:08.561351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T19:33:08.561377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgVoteResp from 4f06aa0eaa8889d9 at term 3"}
	{"level":"info","ts":"2024-09-24T19:33:08.561422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T19:33:08.561455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f06aa0eaa8889d9 elected leader 4f06aa0eaa8889d9 at term 3"}
	{"level":"info","ts":"2024-09-24T19:33:08.566469Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4f06aa0eaa8889d9","local-member-attributes":"{Name:kubernetes-upgrade-629510 ClientURLs:[https://192.168.39.76:2379]}","request-path":"/0/members/4f06aa0eaa8889d9/attributes","cluster-id":"1be8679029844888","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:33:08.566485Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:33:08.566792Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:33:08.566848Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:33:08.566521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:33:08.567692Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:33:08.567855Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:33:08.568615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.76:2379"}
	{"level":"info","ts":"2024-09-24T19:33:08.568975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:33:14 up 1 min,  0 users,  load average: 0.97, 0.26, 0.09
	Linux kubernetes-upgrade-629510 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9684bc750558aa2f0886f564c9cc108923fa1d88b3ca1b04019e216e3062e4eb] <==
	I0924 19:33:09.832486       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 19:33:09.832571       1 policy_source.go:224] refreshing policies
	I0924 19:33:09.871470       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 19:33:09.889354       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 19:33:09.889430       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 19:33:09.890176       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 19:33:09.890302       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 19:33:09.890479       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 19:33:09.890790       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 19:33:09.890845       1 aggregator.go:171] initial CRD sync complete...
	I0924 19:33:09.890869       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 19:33:09.890890       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 19:33:09.890912       1 cache.go:39] Caches are synced for autoregister controller
	I0924 19:33:09.891042       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 19:33:09.891384       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 19:33:09.895859       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0924 19:33:09.895980       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0924 19:33:10.607333       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 19:33:10.704840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:33:11.316212       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 19:33:11.334916       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 19:33:11.400635       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 19:33:11.480578       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:33:11.487925       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:33:13.293749       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586] <==
	I0924 19:33:01.239734       1 options.go:228] external host was not specified, using 192.168.39.76
	I0924 19:33:01.245460       1 server.go:142] Version: v1.31.1
	I0924 19:33:01.245608       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f] <==
	
	
	==> kube-controller-manager [fab7f0a7b42cf22b34a8611398011cb58f0a8cf9cc6c5b531fc4df4dab760f8d] <==
	I0924 19:33:13.239808       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0924 19:33:13.249258       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0924 19:33:13.250053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-629510"
	I0924 19:33:13.258138       1 shared_informer.go:320] Caches are synced for persistent volume
	I0924 19:33:13.277461       1 shared_informer.go:320] Caches are synced for GC
	I0924 19:33:13.283771       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 19:33:13.287224       1 shared_informer.go:320] Caches are synced for disruption
	I0924 19:33:13.288384       1 shared_informer.go:320] Caches are synced for attach detach
	I0924 19:33:13.288659       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0924 19:33:13.289132       1 shared_informer.go:320] Caches are synced for deployment
	I0924 19:33:13.289378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0924 19:33:13.289913       1 shared_informer.go:320] Caches are synced for PVC protection
	I0924 19:33:13.292129       1 shared_informer.go:320] Caches are synced for stateful set
	I0924 19:33:13.305376       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 19:33:13.337670       1 shared_informer.go:320] Caches are synced for taint
	I0924 19:33:13.337804       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0924 19:33:13.337880       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-629510"
	I0924 19:33:13.337925       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0924 19:33:13.450464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="161.728251ms"
	I0924 19:33:13.450918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="59.481µs"
	I0924 19:33:13.712200       1 shared_informer.go:320] Caches are synced for garbage collector
	I0924 19:33:13.712265       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0924 19:33:13.739187       1 shared_informer.go:320] Caches are synced for garbage collector
	I0924 19:33:14.063735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.395707ms"
	I0924 19:33:14.063847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.623µs"
	
	
	==> kube-proxy [8f181ac153491f6c938209f6edb9ca74e0aab54a1545eee4190bcc869d88f352] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:33:10.640496       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:33:10.656459       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.76"]
	E0924 19:33:10.656576       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:33:10.701760       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:33:10.701836       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:33:10.701860       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:33:10.705365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:33:10.705662       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:33:10.705687       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:33:10.708196       1 config.go:199] "Starting service config controller"
	I0924 19:33:10.708223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:33:10.708247       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:33:10.708253       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:33:10.708765       1 config.go:328] "Starting node config controller"
	I0924 19:33:10.708794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:33:10.809172       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:33:10.809214       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:33:10.809248       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684] <==
	
	
	==> kube-scheduler [052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6] <==
	
	
	==> kube-scheduler [e1526d11e7484015af7a0d5b9d384fa27d3b21ec2530598a5ccfd92eca9c000e] <==
	I0924 19:33:07.590939       1 serving.go:386] Generated self-signed cert in-memory
	W0924 19:33:09.730844       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:33:09.730988       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:33:09.731051       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:33:09.731100       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:33:09.785634       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 19:33:09.790102       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:33:09.797669       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 19:33:09.800338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:33:09.801875       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:33:09.801170       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0924 19:33:09.811092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 19:33:09.814656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 19:33:09.915161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.494458    3944 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-629510"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: E0924 19:33:06.495605    3944 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.76:8443: connect: connection refused" node="kubernetes-upgrade-629510"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.506952    3944 scope.go:117] "RemoveContainer" containerID="ec7aa833ae29430f0b54980fef4f64fe773a5d878cda2af42de56c7b69518586"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.510238    3944 scope.go:117] "RemoveContainer" containerID="052ca8d059f75fbb3e968a47d239dd18e9d82ff6206e99ac9c97717e9925dfe6"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.518381    3944 scope.go:117] "RemoveContainer" containerID="9835079f5523c686fc5e02d5d0582267db5d6fd8396b669bf37f3a7bda3f0f4c"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.522192    3944 scope.go:117] "RemoveContainer" containerID="dc513c95cd7bf0fd65060f2a9764afeb15c1a5b50a8d87ee6e7664e9a6c4233f"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: E0924 19:33:06.701795    3944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-629510?timeout=10s\": dial tcp 192.168.39.76:8443: connect: connection refused" interval="800ms"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:06.897234    3944 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-629510"
	Sep 24 19:33:06 kubernetes-upgrade-629510 kubelet[3944]: E0924 19:33:06.897958    3944 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.76:8443: connect: connection refused" node="kubernetes-upgrade-629510"
	Sep 24 19:33:07 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:07.700140    3944 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-629510"
	Sep 24 19:33:09 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:09.862443    3944 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-629510"
	Sep 24 19:33:09 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:09.862533    3944 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-629510"
	Sep 24 19:33:09 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:09.862556    3944 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 24 19:33:09 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:09.863661    3944 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.051945    3944 apiserver.go:52] "Watching apiserver"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.077887    3944 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.159032    3944 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/858d52d1-814f-4f13-8d78-7e9a7d28731f-tmp\") pod \"storage-provisioner\" (UID: \"858d52d1-814f-4f13-8d78-7e9a7d28731f\") " pod="kube-system/storage-provisioner"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.159135    3944 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5c3aab-5d51-44e1-94d5-99d6ecc8b772-xtables-lock\") pod \"kube-proxy-fq4b4\" (UID: \"2b5c3aab-5d51-44e1-94d5-99d6ecc8b772\") " pod="kube-system/kube-proxy-fq4b4"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.159154    3944 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5c3aab-5d51-44e1-94d5-99d6ecc8b772-lib-modules\") pod \"kube-proxy-fq4b4\" (UID: \"2b5c3aab-5d51-44e1-94d5-99d6ecc8b772\") " pod="kube-system/kube-proxy-fq4b4"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: E0924 19:33:10.295902    3944 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-629510\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-629510"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.355959    3944 scope.go:117] "RemoveContainer" containerID="d50034d46ca7534b29d012f3dfa51b8ac9695f0a5d60300afba9a710d7f03684"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.356267    3944 scope.go:117] "RemoveContainer" containerID="0d3c815405b1908d3d2163c4539c8fe5bc6f9944cb90269b6f791416e09bb746"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.356510    3944 scope.go:117] "RemoveContainer" containerID="d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9"
	Sep 24 19:33:10 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:10.356706    3944 scope.go:117] "RemoveContainer" containerID="1802918539ab9fdecc7908ed633da83c64f3f3a81e601b00aab1dcbba9e09aec"
	Sep 24 19:33:14 kubernetes-upgrade-629510 kubelet[3944]: I0924 19:33:14.033177    3944 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [1a5b338e3dde77de8a521b43487dfe0756112a956de04ea168203d5b3b1a9e31] <==
	I0924 19:33:10.582442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:33:10.598592       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:33:10.598664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:33:10.612029       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:33:10.612300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-629510_bbe15c6b-5a41-4f28-b763-4e0b08ad8d9c!
	I0924 19:33:10.613158       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99199057-c4e7-4484-8a00-82f9c1afcb8d", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-629510_bbe15c6b-5a41-4f28-b763-4e0b08ad8d9c became leader
	I0924 19:33:10.714181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-629510_bbe15c6b-5a41-4f28-b763-4e0b08ad8d9c!
	
	
	==> storage-provisioner [d82e7b5072d4ac8b7f76d20fe92f2c0583ef9a3a658ef715b25e44932cbc19b9] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-629510 -n kubernetes-upgrade-629510
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-629510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-629510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-629510
--- FAIL: TestKubernetesUpgrade (365.54s)
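
When a run like this fails, the post-mortem the harness performs above can be repeated by hand before the profile is deleted. A minimal sketch, assuming the kubernetes-upgrade-629510 profile from this run still exists and the out/minikube-linux-amd64 binary built for the job is available:

    # same status check as helpers_test.go:254 above
    out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-629510 -n kubernetes-upgrade-629510
    # same pod listing as helpers_test.go:261 above (pods not in Running phase, all namespaces)
    kubectl --context kubernetes-upgrade-629510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # re-capture component logs, then clean up the profile as helpers_test.go:178 does
    out/minikube-linux-amd64 logs -p kubernetes-upgrade-629510
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-629510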

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.359219641s)

                                                
                                                
-- stdout --
	* [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:36:34.274199   62664 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:36:34.274299   62664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:36:34.274310   62664 out.go:358] Setting ErrFile to fd 2...
	I0924 19:36:34.274314   62664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:36:34.274543   62664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:36:34.275189   62664 out.go:352] Setting JSON to false
	I0924 19:36:34.276477   62664 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4745,"bootTime":1727201849,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:36:34.276594   62664 start.go:139] virtualization: kvm guest
	I0924 19:36:34.278699   62664 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:36:34.280198   62664 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:36:34.280216   62664 notify.go:220] Checking for updates...
	I0924 19:36:34.282591   62664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:36:34.283763   62664 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:36:34.284997   62664 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:36:34.286251   62664 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:36:34.287545   62664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:36:34.289252   62664 config.go:182] Loaded profile config "bridge-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:36:34.289391   62664 config.go:182] Loaded profile config "calico-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:36:34.289493   62664 config.go:182] Loaded profile config "custom-flannel-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:36:34.289599   62664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:36:34.333000   62664 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 19:36:34.334169   62664 start.go:297] selected driver: kvm2
	I0924 19:36:34.334186   62664 start.go:901] validating driver "kvm2" against <nil>
	I0924 19:36:34.334201   62664 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:36:34.336791   62664 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:36:34.336954   62664 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:36:34.352722   62664 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:36:34.352765   62664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 19:36:34.352963   62664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:36:34.352988   62664 cni.go:84] Creating CNI manager for ""
	I0924 19:36:34.353010   62664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:36:34.353018   62664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 19:36:34.353056   62664 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:36:34.353155   62664 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:36:34.355008   62664 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:36:34.356093   62664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:36:34.356121   62664 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:36:34.356130   62664 cache.go:56] Caching tarball of preloaded images
	I0924 19:36:34.356209   62664 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:36:34.356222   62664 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:36:34.356293   62664 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:36:34.356310   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json: {Name:mkdf927324aa8fab3a779211ed6628f8827b91d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:36:34.356449   62664 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:36:35.672558   62664 start.go:364] duration metric: took 1.316070798s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:36:35.672654   62664 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:36:35.672753   62664 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 19:36:35.674467   62664 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 19:36:35.674674   62664 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:36:35.674719   62664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:36:35.693049   62664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43685
	I0924 19:36:35.693454   62664 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:36:35.693971   62664 main.go:141] libmachine: Using API Version  1
	I0924 19:36:35.693986   62664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:36:35.694316   62664 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:36:35.694501   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:36:35.694639   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:36:35.694779   62664 start.go:159] libmachine.API.Create for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:36:35.694812   62664 client.go:168] LocalClient.Create starting
	I0924 19:36:35.694874   62664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 19:36:35.694923   62664 main.go:141] libmachine: Decoding PEM data...
	I0924 19:36:35.694946   62664 main.go:141] libmachine: Parsing certificate...
	I0924 19:36:35.695012   62664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 19:36:35.695039   62664 main.go:141] libmachine: Decoding PEM data...
	I0924 19:36:35.695054   62664 main.go:141] libmachine: Parsing certificate...
	I0924 19:36:35.695079   62664 main.go:141] libmachine: Running pre-create checks...
	I0924 19:36:35.695091   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .PreCreateCheck
	I0924 19:36:35.695464   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:36:35.695829   62664 main.go:141] libmachine: Creating machine...
	I0924 19:36:35.695845   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .Create
	I0924 19:36:35.695972   62664 main.go:141] libmachine: (old-k8s-version-510301) Creating KVM machine...
	I0924 19:36:35.697472   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found existing default KVM network
	I0924 19:36:35.699013   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:35.698745   62713 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b1:77:91} reservation:<nil>}
	I0924 19:36:35.700265   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:35.700162   62713 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:9f:ef} reservation:<nil>}
	I0924 19:36:35.701262   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:35.701139   62713 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:bb:03:18} reservation:<nil>}
	I0924 19:36:35.702502   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:35.702419   62713 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028ba90}
	I0924 19:36:35.702668   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | created network xml: 
	I0924 19:36:35.702691   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | <network>
	I0924 19:36:35.702702   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   <name>mk-old-k8s-version-510301</name>
	I0924 19:36:35.702709   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   <dns enable='no'/>
	I0924 19:36:35.702720   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   
	I0924 19:36:35.702730   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0924 19:36:35.702740   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |     <dhcp>
	I0924 19:36:35.702750   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0924 19:36:35.702762   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |     </dhcp>
	I0924 19:36:35.702769   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   </ip>
	I0924 19:36:35.702806   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG |   
	I0924 19:36:35.702866   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | </network>
	I0924 19:36:35.702882   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | 
	I0924 19:36:35.707746   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | trying to create private KVM network mk-old-k8s-version-510301 192.168.72.0/24...
	I0924 19:36:35.789188   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | private KVM network mk-old-k8s-version-510301 192.168.72.0/24 created
	I0924 19:36:35.789237   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:35.789179   62713 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:36:35.789280   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301 ...
	I0924 19:36:35.789311   62664 main.go:141] libmachine: (old-k8s-version-510301) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 19:36:35.789375   62664 main.go:141] libmachine: (old-k8s-version-510301) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 19:36:36.053783   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:36.053661   62713 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa...
	I0924 19:36:36.191210   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:36.191074   62713 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/old-k8s-version-510301.rawdisk...
	I0924 19:36:36.191250   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Writing magic tar header
	I0924 19:36:36.191275   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Writing SSH key tar header
	I0924 19:36:36.191298   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:36.191230   62713 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301 ...
	I0924 19:36:36.191416   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301
	I0924 19:36:36.191439   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 19:36:36.191457   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301 (perms=drwx------)
	I0924 19:36:36.191466   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:36:36.191475   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 19:36:36.191485   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 19:36:36.191497   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home/jenkins
	I0924 19:36:36.191508   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Checking permissions on dir: /home
	I0924 19:36:36.191525   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 19:36:36.191533   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 19:36:36.191538   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Skipping /home - not owner
	I0924 19:36:36.191548   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 19:36:36.191556   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 19:36:36.191566   62664 main.go:141] libmachine: (old-k8s-version-510301) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 19:36:36.191576   62664 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:36:36.192712   62664 main.go:141] libmachine: (old-k8s-version-510301) define libvirt domain using xml: 
	I0924 19:36:36.192736   62664 main.go:141] libmachine: (old-k8s-version-510301) <domain type='kvm'>
	I0924 19:36:36.192767   62664 main.go:141] libmachine: (old-k8s-version-510301)   <name>old-k8s-version-510301</name>
	I0924 19:36:36.192796   62664 main.go:141] libmachine: (old-k8s-version-510301)   <memory unit='MiB'>2200</memory>
	I0924 19:36:36.192809   62664 main.go:141] libmachine: (old-k8s-version-510301)   <vcpu>2</vcpu>
	I0924 19:36:36.192818   62664 main.go:141] libmachine: (old-k8s-version-510301)   <features>
	I0924 19:36:36.192841   62664 main.go:141] libmachine: (old-k8s-version-510301)     <acpi/>
	I0924 19:36:36.192851   62664 main.go:141] libmachine: (old-k8s-version-510301)     <apic/>
	I0924 19:36:36.192859   62664 main.go:141] libmachine: (old-k8s-version-510301)     <pae/>
	I0924 19:36:36.192867   62664 main.go:141] libmachine: (old-k8s-version-510301)     
	I0924 19:36:36.192877   62664 main.go:141] libmachine: (old-k8s-version-510301)   </features>
	I0924 19:36:36.192885   62664 main.go:141] libmachine: (old-k8s-version-510301)   <cpu mode='host-passthrough'>
	I0924 19:36:36.192896   62664 main.go:141] libmachine: (old-k8s-version-510301)   
	I0924 19:36:36.192903   62664 main.go:141] libmachine: (old-k8s-version-510301)   </cpu>
	I0924 19:36:36.192914   62664 main.go:141] libmachine: (old-k8s-version-510301)   <os>
	I0924 19:36:36.192924   62664 main.go:141] libmachine: (old-k8s-version-510301)     <type>hvm</type>
	I0924 19:36:36.192932   62664 main.go:141] libmachine: (old-k8s-version-510301)     <boot dev='cdrom'/>
	I0924 19:36:36.192944   62664 main.go:141] libmachine: (old-k8s-version-510301)     <boot dev='hd'/>
	I0924 19:36:36.192955   62664 main.go:141] libmachine: (old-k8s-version-510301)     <bootmenu enable='no'/>
	I0924 19:36:36.192961   62664 main.go:141] libmachine: (old-k8s-version-510301)   </os>
	I0924 19:36:36.192977   62664 main.go:141] libmachine: (old-k8s-version-510301)   <devices>
	I0924 19:36:36.192987   62664 main.go:141] libmachine: (old-k8s-version-510301)     <disk type='file' device='cdrom'>
	I0924 19:36:36.193002   62664 main.go:141] libmachine: (old-k8s-version-510301)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/boot2docker.iso'/>
	I0924 19:36:36.193013   62664 main.go:141] libmachine: (old-k8s-version-510301)       <target dev='hdc' bus='scsi'/>
	I0924 19:36:36.193024   62664 main.go:141] libmachine: (old-k8s-version-510301)       <readonly/>
	I0924 19:36:36.193033   62664 main.go:141] libmachine: (old-k8s-version-510301)     </disk>
	I0924 19:36:36.193042   62664 main.go:141] libmachine: (old-k8s-version-510301)     <disk type='file' device='disk'>
	I0924 19:36:36.193052   62664 main.go:141] libmachine: (old-k8s-version-510301)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 19:36:36.193064   62664 main.go:141] libmachine: (old-k8s-version-510301)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/old-k8s-version-510301.rawdisk'/>
	I0924 19:36:36.193074   62664 main.go:141] libmachine: (old-k8s-version-510301)       <target dev='hda' bus='virtio'/>
	I0924 19:36:36.193085   62664 main.go:141] libmachine: (old-k8s-version-510301)     </disk>
	I0924 19:36:36.193093   62664 main.go:141] libmachine: (old-k8s-version-510301)     <interface type='network'>
	I0924 19:36:36.193106   62664 main.go:141] libmachine: (old-k8s-version-510301)       <source network='mk-old-k8s-version-510301'/>
	I0924 19:36:36.193141   62664 main.go:141] libmachine: (old-k8s-version-510301)       <model type='virtio'/>
	I0924 19:36:36.193184   62664 main.go:141] libmachine: (old-k8s-version-510301)     </interface>
	I0924 19:36:36.193196   62664 main.go:141] libmachine: (old-k8s-version-510301)     <interface type='network'>
	I0924 19:36:36.193237   62664 main.go:141] libmachine: (old-k8s-version-510301)       <source network='default'/>
	I0924 19:36:36.193251   62664 main.go:141] libmachine: (old-k8s-version-510301)       <model type='virtio'/>
	I0924 19:36:36.193262   62664 main.go:141] libmachine: (old-k8s-version-510301)     </interface>
	I0924 19:36:36.193271   62664 main.go:141] libmachine: (old-k8s-version-510301)     <serial type='pty'>
	I0924 19:36:36.193280   62664 main.go:141] libmachine: (old-k8s-version-510301)       <target port='0'/>
	I0924 19:36:36.193289   62664 main.go:141] libmachine: (old-k8s-version-510301)     </serial>
	I0924 19:36:36.193297   62664 main.go:141] libmachine: (old-k8s-version-510301)     <console type='pty'>
	I0924 19:36:36.193307   62664 main.go:141] libmachine: (old-k8s-version-510301)       <target type='serial' port='0'/>
	I0924 19:36:36.193315   62664 main.go:141] libmachine: (old-k8s-version-510301)     </console>
	I0924 19:36:36.193325   62664 main.go:141] libmachine: (old-k8s-version-510301)     <rng model='virtio'>
	I0924 19:36:36.193334   62664 main.go:141] libmachine: (old-k8s-version-510301)       <backend model='random'>/dev/random</backend>
	I0924 19:36:36.193344   62664 main.go:141] libmachine: (old-k8s-version-510301)     </rng>
	I0924 19:36:36.193352   62664 main.go:141] libmachine: (old-k8s-version-510301)     
	I0924 19:36:36.193361   62664 main.go:141] libmachine: (old-k8s-version-510301)     
	I0924 19:36:36.193374   62664 main.go:141] libmachine: (old-k8s-version-510301)   </devices>
	I0924 19:36:36.193384   62664 main.go:141] libmachine: (old-k8s-version-510301) </domain>
	I0924 19:36:36.193394   62664 main.go:141] libmachine: (old-k8s-version-510301) 
	I0924 19:36:36.197386   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:44:3b:4c in network default
	I0924 19:36:36.198162   62664 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:36:36.198188   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:36.199075   62664 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:36:36.199469   62664 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:36:36.200191   62664 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:36:36.201067   62664 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:36:37.551858   62664 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:36:37.553029   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:37.553626   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:37.553655   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:37.553627   62713 retry.go:31] will retry after 299.812936ms: waiting for machine to come up
	I0924 19:36:37.855430   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:37.856026   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:37.856074   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:37.855990   62713 retry.go:31] will retry after 359.760091ms: waiting for machine to come up
	I0924 19:36:38.217602   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:38.218269   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:38.218291   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:38.218230   62713 retry.go:31] will retry after 296.677763ms: waiting for machine to come up
	I0924 19:36:38.516906   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:38.517448   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:38.517480   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:38.517401   62713 retry.go:31] will retry after 497.753844ms: waiting for machine to come up
	I0924 19:36:39.017057   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:39.017734   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:39.017763   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:39.017653   62713 retry.go:31] will retry after 578.443253ms: waiting for machine to come up
	I0924 19:36:39.597493   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:39.598080   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:39.598115   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:39.598031   62713 retry.go:31] will retry after 844.703736ms: waiting for machine to come up
	I0924 19:36:40.444081   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:40.445256   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:40.445291   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:40.445197   62713 retry.go:31] will retry after 820.762581ms: waiting for machine to come up
	I0924 19:36:41.267970   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:41.268548   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:41.268577   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:41.268496   62713 retry.go:31] will retry after 1.126432933s: waiting for machine to come up
	I0924 19:36:42.397082   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:42.397534   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:42.397590   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:42.397522   62713 retry.go:31] will retry after 1.272574515s: waiting for machine to come up
	I0924 19:36:43.672082   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:43.672561   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:43.672594   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:43.672508   62713 retry.go:31] will retry after 2.225667355s: waiting for machine to come up
	I0924 19:36:45.899472   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:45.899958   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:45.899983   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:45.899935   62713 retry.go:31] will retry after 2.373195555s: waiting for machine to come up
	I0924 19:36:48.276044   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:48.276718   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:48.276747   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:48.276636   62713 retry.go:31] will retry after 3.548066204s: waiting for machine to come up
	I0924 19:36:51.826156   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:51.826680   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:51.826701   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:51.826646   62713 retry.go:31] will retry after 4.098574416s: waiting for machine to come up
	I0924 19:36:55.926515   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:36:55.927185   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:36:55.927210   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:36:55.927148   62713 retry.go:31] will retry after 4.562350139s: waiting for machine to come up
	I0924 19:37:00.492020   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.492647   62664 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:37:00.492672   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.492679   62664 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:37:00.493086   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301
	I0924 19:37:00.581420   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:37:00.581449   62664 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:37:00.581462   62664 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:37:00.584944   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.585511   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:00.585539   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.585723   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:37:00.585755   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:37:00.585786   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:37:00.585799   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:37:00.585810   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:37:00.715843   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:37:00.716301   62664 main.go:141] libmachine: (old-k8s-version-510301) KVM machine creation complete!
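	(The "will retry after ..." lines above show the driver polling the libvirt DHCP leases with a growing, jittered delay until the guest obtains an address. Below is a minimal illustrative sketch of that retry-with-backoff pattern; waitForIP, lookupIP, the 300ms starting interval and the 1.5x growth factor are assumptions for the sketch, not minikube's actual retry.go.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases for the domain's
	// MAC address; here it always fails so the loop's behaviour is visible.
	func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	// waitForIP retries with a growing, jittered delay until an IP appears or
	// the overall timeout expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow the base interval each attempt
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		if _, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		}
	}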
	I0924 19:37:00.716728   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:37:00.717306   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:00.717558   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:00.717746   62664 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 19:37:00.717761   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:37:00.719186   62664 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 19:37:00.719200   62664 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 19:37:00.719206   62664 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 19:37:00.719215   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:00.722255   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.722812   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:00.722853   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.723059   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:00.723215   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.723391   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.723523   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:00.723685   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:00.723890   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:00.723904   62664 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 19:37:00.830637   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
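	(The WaitForSSH step above probes the guest by running "exit 0" over the external ssh client with the options logged earlier; a zero exit status means sshd is up and the injected key works. A minimal sketch of that probe follows; sshReady, the address and the key path are placeholders, and a real caller would bound the loop with a timeout.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest with roughly the options shown in the
	// log; a nil error from Run means the command exited 0 and SSH is usable.
	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		// Keep probing until the guest answers (unbounded here for brevity).
		for !sshReady("192.168.72.81", "/path/to/id_rsa") {
			time.Sleep(time.Second)
		}
		fmt.Println("SSH is available")
	}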
	I0924 19:37:00.830664   62664 main.go:141] libmachine: Detecting the provisioner...
	I0924 19:37:00.830675   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:00.833916   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.834359   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:00.834387   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.834616   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:00.834821   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.835045   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.835214   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:00.835387   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:00.835624   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:00.835640   62664 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 19:37:00.936367   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 19:37:00.936442   62664 main.go:141] libmachine: found compatible host: buildroot
	I0924 19:37:00.936452   62664 main.go:141] libmachine: Provisioning with buildroot...
	I0924 19:37:00.936469   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:37:00.936760   62664 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:37:00.936790   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:37:00.937065   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:00.940192   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.940714   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:00.940740   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:00.940832   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:00.941032   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.941211   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:00.941385   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:00.941575   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:00.941805   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:00.941820   62664 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:37:01.059156   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:37:01.059189   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:01.062381   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.062776   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.062805   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.062974   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:01.063176   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.063348   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.063524   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:01.063694   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:01.063905   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:01.063938   62664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:37:01.182758   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:37:01.182791   62664 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:37:01.182886   62664 buildroot.go:174] setting up certificates
	I0924 19:37:01.182960   62664 provision.go:84] configureAuth start
	I0924 19:37:01.182987   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:37:01.183276   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:37:01.186625   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.187062   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.187085   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.187297   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:01.190005   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.190349   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.190412   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.190511   62664 provision.go:143] copyHostCerts
	I0924 19:37:01.190573   62664 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:37:01.190585   62664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:37:01.190643   62664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:37:01.190753   62664 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:37:01.190763   62664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:37:01.190795   62664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:37:01.190920   62664 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:37:01.190933   62664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:37:01.190965   62664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:37:01.191044   62664 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:37:01.452291   62664 provision.go:177] copyRemoteCerts
	I0924 19:37:01.452352   62664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:37:01.452374   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:01.455652   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.456109   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.456140   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.456291   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:01.456480   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.456664   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:01.456819   62664 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:37:01.552297   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:37:01.575723   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:37:01.604297   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:37:01.630443   62664 provision.go:87] duration metric: took 447.463211ms to configureAuth
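	(configureAuth above refreshes the host-side certificates with a remove-then-copy pass (copyHostCerts) before generating the server cert and scp-ing it to the guest. The sketch below illustrates only that idempotent remove-then-copy step; copyCert, the 0600 mode and the $HOME/.minikube layout are assumptions for the sketch.)

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// copyCert mirrors the pattern in the copyHostCerts lines: an existing
	// destination is removed first, then the source is written fresh.
	func copyCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		base := os.ExpandEnv("$HOME/.minikube")
		for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
			if err := copyCert(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
				fmt.Println("copy failed:", err)
			}
		}
	}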
	I0924 19:37:01.630473   62664 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:37:01.630678   62664 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:37:01.630757   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:01.633442   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.633959   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.633988   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.634156   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:01.634381   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.634894   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.635094   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:01.635273   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:01.635477   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:01.635537   62664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:37:01.883315   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:37:01.883348   62664 main.go:141] libmachine: Checking connection to Docker...
	I0924 19:37:01.883361   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetURL
	I0924 19:37:01.884841   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using libvirt version 6000000
	I0924 19:37:01.887644   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.888041   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.888070   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.888288   62664 main.go:141] libmachine: Docker is up and running!
	I0924 19:37:01.888301   62664 main.go:141] libmachine: Reticulating splines...
	I0924 19:37:01.888307   62664 client.go:171] duration metric: took 26.19348615s to LocalClient.Create
	I0924 19:37:01.888329   62664 start.go:167] duration metric: took 26.193550591s to libmachine.API.Create "old-k8s-version-510301"
	I0924 19:37:01.888341   62664 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:37:01.888357   62664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:37:01.888379   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:01.888656   62664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:37:01.888681   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:01.891521   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.894147   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:01.894173   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:01.894407   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:01.894569   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:01.894734   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:01.894856   62664 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:37:01.977769   62664 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:37:01.983219   62664 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:37:01.983240   62664 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:37:01.983292   62664 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:37:01.984037   62664 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:37:01.984159   62664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:37:01.998059   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:37:02.028048   62664 start.go:296] duration metric: took 139.691305ms for postStartSetup
	I0924 19:37:02.028097   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:37:02.028782   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:37:02.031786   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.032248   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:02.032275   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.032623   62664 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:37:02.032844   62664 start.go:128] duration metric: took 26.360076927s to createHost
	I0924 19:37:02.032868   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:02.035596   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.035980   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:02.036007   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.036161   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:02.036362   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:02.036525   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:02.036690   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:02.036864   62664 main.go:141] libmachine: Using SSH client type: native
	I0924 19:37:02.037047   62664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:37:02.037064   62664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:37:02.145079   62664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727206622.093992659
	
	I0924 19:37:02.145098   62664 fix.go:216] guest clock: 1727206622.093992659
	I0924 19:37:02.145107   62664 fix.go:229] Guest: 2024-09-24 19:37:02.093992659 +0000 UTC Remote: 2024-09-24 19:37:02.03285697 +0000 UTC m=+27.799455964 (delta=61.135689ms)
	I0924 19:37:02.145129   62664 fix.go:200] guest clock delta is within tolerance: 61.135689ms
	I0924 19:37:02.145136   62664 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 26.472530334s
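	(The fix.go lines above read the guest clock with "date +%s.%N" and compare it to the host clock before declaring the skew within tolerance. A small sketch of that comparison follows; clockDeltaOK and the 2s tolerance are assumptions for the sketch, and the parsed example value is the one from the log.)

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDeltaOK parses the guest's "date +%s.%N" output and checks the skew
	// against the host clock.
	func clockDeltaOK(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		delta, ok := clockDeltaOK("1727206622.093992659", 2*time.Second)
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}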
	I0924 19:37:02.145157   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:02.145449   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:37:02.148832   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.149244   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:02.149268   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.149419   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:02.149900   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:02.150077   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:37:02.150146   62664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:37:02.150202   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:02.150264   62664 ssh_runner.go:195] Run: cat /version.json
	I0924 19:37:02.150289   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:37:02.153049   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.153247   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.153424   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:02.153447   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.153583   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:02.153703   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:02.153721   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:02.153778   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:02.153933   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:02.154009   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:37:02.154089   62664 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:37:02.154462   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:37:02.154637   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:37:02.154769   62664 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:37:02.234673   62664 ssh_runner.go:195] Run: systemctl --version
	I0924 19:37:02.259089   62664 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:37:02.428057   62664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:37:02.434400   62664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:37:02.434454   62664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:37:02.454540   62664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:37:02.454560   62664 start.go:495] detecting cgroup driver to use...
	I0924 19:37:02.454628   62664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:37:02.472914   62664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:37:02.486632   62664 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:37:02.486705   62664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:37:02.499735   62664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:37:02.514873   62664 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:37:02.676264   62664 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:37:02.872378   62664 docker.go:233] disabling docker service ...
	I0924 19:37:02.872454   62664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:37:02.890856   62664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:37:02.904255   62664 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:37:03.068912   62664 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:37:03.205715   62664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:37:03.220287   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:37:03.243162   62664 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:37:03.243221   62664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:37:03.261040   62664 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:37:03.261113   62664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:37:03.272937   62664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:37:03.287397   62664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:37:03.299718   62664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:37:03.311197   62664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:37:03.320972   62664 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:37:03.321026   62664 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:37:03.338288   62664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:37:03.350706   62664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:37:03.491579   62664 ssh_runner.go:195] Run: sudo systemctl restart crio
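	(The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed so CRI-O uses the requested pause image and the cgroupfs cgroup manager, then reloads systemd and restarts the service. The sketch below only assembles those command strings as shown in the log; crioConfigCmds is an assumption, and actually executing them over SSH is elided.)

	package main

	import "fmt"

	// crioConfigCmds reproduces, in spirit, the sed edits shown in the log:
	// point CRI-O at the requested pause image, switch to the cgroupfs cgroup
	// manager with conmon in the "pod" cgroup, then restart the service.
	func crioConfigCmds(pauseImage string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2") {
			fmt.Println(c)
		}
	}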
	I0924 19:37:03.593463   62664 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:37:03.593546   62664 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:37:03.599072   62664 start.go:563] Will wait 60s for crictl version
	I0924 19:37:03.599133   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:03.604043   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:37:03.664509   62664 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:37:03.664599   62664 ssh_runner.go:195] Run: crio --version
	I0924 19:37:03.708083   62664 ssh_runner.go:195] Run: crio --version
	I0924 19:37:03.756303   62664 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:37:03.757975   62664 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:37:03.762020   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:03.762387   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:37:03.762412   62664 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:37:03.762658   62664 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:37:03.770475   62664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:37:03.785156   62664 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:37:03.785287   62664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:37:03.785349   62664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:37:03.821553   62664 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:37:03.821626   62664 ssh_runner.go:195] Run: which lz4
	I0924 19:37:03.825721   62664 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:37:03.830963   62664 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:37:03.830994   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:37:05.418496   62664 crio.go:462] duration metric: took 1.592807826s to copy over tarball
	I0924 19:37:05.418564   62664 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:37:08.383355   62664 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.964770418s)
	I0924 19:37:08.383415   62664 crio.go:469] duration metric: took 2.9648553s to extract the tarball
	I0924 19:37:08.383428   62664 ssh_runner.go:146] rm: /preloaded.tar.lz4
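	(The preload step above copies the v1.20.0 image tarball to the guest and unpacks it into /var with lz4 and xattr preservation, then deletes the tarball. A minimal sketch of the extraction follows; extractPreload is an assumption, and it runs tar locally rather than over SSH.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload checks that the tarball exists and unpacks it into /var
	// with the same tar flags the log shows.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload %s not available: %w", tarball, err)
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}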
	I0924 19:37:08.431129   62664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:37:08.519053   62664 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:37:08.519078   62664 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:37:08.519132   62664 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:37:08.519392   62664 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:37:08.519432   62664 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:08.519515   62664 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.519620   62664 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:08.519391   62664 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.519774   62664 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.519621   62664 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.520800   62664 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.521103   62664 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:37:08.521151   62664 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.521300   62664 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:08.521404   62664 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:37:08.521593   62664 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:08.522052   62664 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.522972   62664 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.662964   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.669487   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.675386   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.683666   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.695884   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:37:08.696676   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:08.753375   62664 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:37:08.753418   62664 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.753456   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.755361   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:08.816297   62664 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:37:08.816349   62664 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.816397   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.821741   62664 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:37:08.821782   62664 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.821839   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.823009   62664 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:37:08.823043   62664 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.823082   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.851830   62664 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:37:08.851856   62664 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:37:08.851874   62664 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:08.851891   62664 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:37:08.851915   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.851927   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.851933   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.857926   62664 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:37:08.857964   62664 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:08.857970   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.857987   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.857999   62664 ssh_runner.go:195] Run: which crictl
	I0924 19:37:08.858039   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.862350   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:08.951764   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:08.951775   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:37:08.954124   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:37:08.982320   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:08.982388   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:08.982481   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:08.982517   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:09.081321   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:37:09.083124   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:37:09.092217   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:37:09.150538   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:37:09.163109   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:37:09.163265   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:37:09.163365   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:09.198167   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:37:09.219237   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:37:09.232123   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:37:09.311263   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:37:09.311310   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:37:09.313801   62664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:37:09.313803   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:37:09.313865   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:37:09.346124   62664 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:37:09.562013   62664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:37:09.700468   62664 cache_images.go:92] duration metric: took 1.18137326s to LoadCachedImages
	W0924 19:37:09.700574   62664 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0924 19:37:09.700590   62664 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:37:09.700692   62664 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
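For anyone reproducing this failure by hand, here is a minimal sketch (an editor-added example, not part of the captured log) of how the kubelet unit and drop-in that minikube renders above can be inspected on the node, assuming access via "minikube ssh -p old-k8s-version-510301" and the file paths shown in this run:

	# Show the kubelet unit plus any drop-ins (10-kubeadm.conf carries the ExecStart line above)
	sudo systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# Check whether the service is actually running and look at its recent output
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 50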
	I0924 19:37:09.700768   62664 ssh_runner.go:195] Run: crio config
	I0924 19:37:09.749310   62664 cni.go:84] Creating CNI manager for ""
	I0924 19:37:09.749331   62664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:37:09.749341   62664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:37:09.749359   62664 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:37:09.749478   62664 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
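As a side note (editor-added, not part of the captured log), a config of this shape can be exercised in isolation before a full init attempt; a minimal sketch, assuming the kubeadm v1.20.0 binary is staged under /var/lib/minikube/binaries and the rendered config has been copied to /var/tmp/minikube/kubeadm.yaml as happens later in this run:

	# Run only the preflight checks against the generated config
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	# List the control-plane images the config implies, without pulling them
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml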
	
	I0924 19:37:09.749536   62664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:37:09.759879   62664 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:37:09.759950   62664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:37:09.769291   62664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:37:09.786761   62664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:37:09.802761   62664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:37:09.820855   62664 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:37:09.826167   62664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:37:09.838556   62664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:37:09.969818   62664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:37:09.986978   62664 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:37:09.986996   62664 certs.go:194] generating shared ca certs ...
	I0924 19:37:09.987012   62664 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:09.987142   62664 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:37:09.987180   62664 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:37:09.987187   62664 certs.go:256] generating profile certs ...
	I0924 19:37:09.987235   62664 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:37:09.987248   62664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.crt with IP's: []
	I0924 19:37:10.080917   62664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.crt ...
	I0924 19:37:10.080944   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.crt: {Name:mk3579af0b254e09c8f77a596c5759850666b081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.081100   62664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key ...
	I0924 19:37:10.081113   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key: {Name:mk346af4c69506cd487f32da434f85424217d71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.081191   62664 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:37:10.081206   62664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt.32de9897 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.81]
	I0924 19:37:10.341074   62664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt.32de9897 ...
	I0924 19:37:10.341103   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt.32de9897: {Name:mkfb643259d6a2af5687de358ec2648a5db68174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.341256   62664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897 ...
	I0924 19:37:10.341268   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897: {Name:mkd147191861e6583278e999a3f1f4d437b4271d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.341338   62664 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt.32de9897 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt
	I0924 19:37:10.341409   62664 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897 -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key
	I0924 19:37:10.341464   62664 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:37:10.341479   62664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt with IP's: []
	I0924 19:37:10.476543   62664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt ...
	I0924 19:37:10.476575   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt: {Name:mk9efa20e4512a7c6c7dcac41d6430de46a5fa86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.479905   62664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key ...
	I0924 19:37:10.479934   62664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key: {Name:mkb32bc1228b09aa3cf3118bc4d40394ea93b955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:37:10.480213   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:37:10.480260   62664 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:37:10.480274   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:37:10.480303   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:37:10.480335   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:37:10.480365   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:37:10.480415   62664 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:37:10.481273   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:37:10.507969   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:37:10.532499   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:37:10.557196   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:37:10.586010   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:37:10.620154   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:37:10.646747   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:37:10.676450   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:37:10.707573   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:37:10.733888   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:37:10.759095   62664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:37:10.784490   62664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:37:10.805338   62664 ssh_runner.go:195] Run: openssl version
	I0924 19:37:10.811805   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:37:10.833058   62664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:37:10.849568   62664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:37:10.849617   62664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:37:10.857482   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:37:10.872917   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:37:10.890201   62664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:37:10.897869   62664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:37:10.897928   62664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:37:10.904979   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:37:10.922873   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:37:10.938425   62664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:37:10.947344   62664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:37:10.947410   62664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:37:10.953837   62664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:37:10.964786   62664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:37:10.968890   62664 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 19:37:10.968946   62664 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:37:10.969022   62664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:37:10.969071   62664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:37:11.013878   62664 cri.go:89] found id: ""
	I0924 19:37:11.013952   62664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:37:11.026890   62664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:37:11.038605   62664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:37:11.050095   62664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:37:11.050116   62664 kubeadm.go:157] found existing configuration files:
	
	I0924 19:37:11.050163   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:37:11.061106   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:37:11.061165   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:37:11.073274   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:37:11.084898   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:37:11.084943   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:37:11.096752   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:37:11.109203   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:37:11.109250   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:37:11.120754   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:37:11.130956   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:37:11.131000   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:37:11.142735   62664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:37:11.297236   62664 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:37:11.297315   62664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:37:11.482186   62664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:37:11.482288   62664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:37:11.482371   62664 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:37:11.701107   62664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:37:11.703815   62664 out.go:235]   - Generating certificates and keys ...
	I0924 19:37:11.703928   62664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:37:11.704037   62664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:37:12.037814   62664 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 19:37:12.146799   62664 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 19:37:12.295015   62664 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 19:37:13.264799   62664 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 19:37:13.490667   62664 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 19:37:13.490987   62664 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	I0924 19:37:13.854305   62664 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 19:37:13.854481   62664 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	I0924 19:37:14.107207   62664 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 19:37:14.382737   62664 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 19:37:14.738976   62664 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 19:37:14.739059   62664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:37:15.184525   62664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:37:15.316940   62664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:37:15.672408   62664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:37:16.133438   62664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:37:16.159136   62664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:37:16.159294   62664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:37:16.159359   62664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:37:16.347512   62664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:37:16.544706   62664 out.go:235]   - Booting up control plane ...
	I0924 19:37:16.544856   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:37:16.544962   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:37:16.545056   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:37:16.545158   62664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:37:16.545371   62664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:37:56.326578   62664 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:37:56.326712   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:37:56.326983   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:38:01.326338   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:38:01.326563   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:38:11.325653   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:38:11.325945   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:38:31.325556   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:38:31.325816   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:39:11.327312   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:39:11.327714   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:39:11.327748   62664 kubeadm.go:310] 
	I0924 19:39:11.327815   62664 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:39:11.327868   62664 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:39:11.327878   62664 kubeadm.go:310] 
	I0924 19:39:11.327920   62664 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:39:11.327970   62664 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:39:11.328142   62664 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:39:11.328155   62664 kubeadm.go:310] 
	I0924 19:39:11.328290   62664 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:39:11.328341   62664 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:39:11.328389   62664 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:39:11.328402   62664 kubeadm.go:310] 
	I0924 19:39:11.328545   62664 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:39:11.328665   62664 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:39:11.328676   62664 kubeadm.go:310] 
	I0924 19:39:11.328761   62664 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:39:11.328889   62664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:39:11.328989   62664 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:39:11.329104   62664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:39:11.329147   62664 kubeadm.go:310] 
	I0924 19:39:11.329310   62664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:39:11.329426   62664 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:39:11.329599   62664 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:39:11.329680   62664 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510301] and IPs [192.168.72.81 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
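Collected in one place (editor-added, not part of the captured log), a minimal triage sketch for this kubelet health-check timeout, using only the commands kubeadm itself suggests above; CONTAINERID is a placeholder and everything runs on the node:

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List control-plane containers that cri-o may have started (or failed to start)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then inspect the failing container's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The healthz probe kubeadm was retrying
	curl -sSL http://localhost:10248/healthz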
	
	I0924 19:39:11.329733   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:39:11.774376   62664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:39:11.787871   62664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:39:11.796263   62664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:39:11.796279   62664 kubeadm.go:157] found existing configuration files:
	
	I0924 19:39:11.796320   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:39:11.804127   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:39:11.804168   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:39:11.812048   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:39:11.819700   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:39:11.819742   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:39:11.827970   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:39:11.835852   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:39:11.835898   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:39:11.844328   62664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:39:11.852507   62664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:39:11.852582   62664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:39:11.860977   62664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:39:11.922914   62664 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:39:11.922989   62664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:39:12.047881   62664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:39:12.047997   62664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:39:12.048110   62664 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:39:12.201672   62664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:39:12.204268   62664 out.go:235]   - Generating certificates and keys ...
	I0924 19:39:12.204375   62664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:39:12.204476   62664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:39:12.204588   62664 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:39:12.204693   62664 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:39:12.204777   62664 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:39:12.204839   62664 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:39:12.204912   62664 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:39:12.204995   62664 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:39:12.205120   62664 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:39:12.205233   62664 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:39:12.205299   62664 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:39:12.205385   62664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:39:12.346234   62664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:39:12.417104   62664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:39:12.718065   62664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:39:12.873523   62664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:39:12.891292   62664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:39:12.892349   62664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:39:12.892420   62664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:39:13.008215   62664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:39:13.010973   62664 out.go:235]   - Booting up control plane ...
	I0924 19:39:13.011106   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:39:13.018143   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:39:13.019102   62664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:39:13.019801   62664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:39:13.022199   62664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:39:53.024564   62664 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:39:53.024686   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:39:53.024991   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:39:58.025613   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:39:58.025859   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:40:08.026485   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:40:08.026708   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:40:28.028264   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:40:28.028451   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:41:08.030072   62664 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:41:08.030284   62664 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:41:08.030316   62664 kubeadm.go:310] 
	I0924 19:41:08.030391   62664 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:41:08.030468   62664 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:41:08.030486   62664 kubeadm.go:310] 
	I0924 19:41:08.030541   62664 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:41:08.030584   62664 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:41:08.030726   62664 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:41:08.030738   62664 kubeadm.go:310] 
	I0924 19:41:08.030900   62664 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:41:08.030966   62664 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:41:08.031024   62664 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:41:08.031038   62664 kubeadm.go:310] 
	I0924 19:41:08.031164   62664 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:41:08.031275   62664 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:41:08.031295   62664 kubeadm.go:310] 
	I0924 19:41:08.031430   62664 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:41:08.031546   62664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:41:08.031669   62664 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:41:08.031771   62664 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:41:08.031782   62664 kubeadm.go:310] 
	I0924 19:41:08.032170   62664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:41:08.032272   62664 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:41:08.032368   62664 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:41:08.032438   62664 kubeadm.go:394] duration metric: took 3m57.063494409s to StartCluster
	I0924 19:41:08.032488   62664 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:41:08.032546   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:41:08.064676   62664 cri.go:89] found id: ""
	I0924 19:41:08.064710   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.064722   62664 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:41:08.064729   62664 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:41:08.064781   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:41:08.094718   62664 cri.go:89] found id: ""
	I0924 19:41:08.094741   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.094749   62664 logs.go:278] No container was found matching "etcd"
	I0924 19:41:08.094755   62664 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:41:08.094803   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:41:08.125033   62664 cri.go:89] found id: ""
	I0924 19:41:08.125065   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.125076   62664 logs.go:278] No container was found matching "coredns"
	I0924 19:41:08.125084   62664 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:41:08.125136   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:41:08.162277   62664 cri.go:89] found id: ""
	I0924 19:41:08.162305   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.162316   62664 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:41:08.162323   62664 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:41:08.162385   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:41:08.193375   62664 cri.go:89] found id: ""
	I0924 19:41:08.193400   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.193408   62664 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:41:08.193418   62664 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:41:08.193468   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:41:08.225540   62664 cri.go:89] found id: ""
	I0924 19:41:08.225566   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.225574   62664 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:41:08.225585   62664 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:41:08.225641   62664 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:41:08.264333   62664 cri.go:89] found id: ""
	I0924 19:41:08.264362   62664 logs.go:276] 0 containers: []
	W0924 19:41:08.264373   62664 logs.go:278] No container was found matching "kindnet"
	I0924 19:41:08.264384   62664 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:41:08.264398   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:41:08.379170   62664 logs.go:123] Gathering logs for container status ...
	I0924 19:41:08.379212   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:41:08.414820   62664 logs.go:123] Gathering logs for kubelet ...
	I0924 19:41:08.414869   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:41:08.462309   62664 logs.go:123] Gathering logs for dmesg ...
	I0924 19:41:08.462345   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:41:08.475641   62664 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:41:08.475668   62664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:41:08.576521   62664 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0924 19:41:08.576551   62664 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:41:08.576599   62664 out.go:270] * 
	* 
	W0924 19:41:08.576656   62664 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:41:08.576673   62664 out.go:270] * 
	* 
	W0924 19:41:08.577544   62664 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:41:08.580411   62664 out.go:201] 
	W0924 19:41:08.581936   62664 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:41:08.581993   62664 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:41:08.582020   62664 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:41:08.583604   62664 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 6 (220.098544ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:08.846352   69150 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510301" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.63s)
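The kubeadm output above only says that the kubelet never answered on localhost:10248; to see why, the checks it suggests have to be run inside the minikube VM itself. A minimal sketch, assuming the profile name old-k8s-version-510301 from this run and that the VM is still reachable over SSH (these commands are not part of the test, just the node-side follow-up):

	# open a shell on the node created by this profile
	minikube ssh -p old-k8s-version-510301
	# inside the VM: kubelet state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200
	# control-plane containers that CRI-O managed to start, if any
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the journal points at a cgroup-driver mismatch, the retry already suggested in the log can be tried on a fresh start, e.g. minikube start -p old-k8s-version-510301 --extra-config=kubelet.cgroup-driver=systemd; whether that fixes this particular failure is not verified here.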

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-311319 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-311319 --alsologtostderr -v=3: exit status 82 (2m0.485265984s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-311319"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:38:49.820649   68340 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:38:49.821056   68340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:38:49.821113   68340 out.go:358] Setting ErrFile to fd 2...
	I0924 19:38:49.821127   68340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:38:49.822301   68340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:38:49.822609   68340 out.go:352] Setting JSON to false
	I0924 19:38:49.822718   68340 mustload.go:65] Loading cluster: embed-certs-311319
	I0924 19:38:49.823257   68340 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:38:49.823363   68340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:38:49.823593   68340 mustload.go:65] Loading cluster: embed-certs-311319
	I0924 19:38:49.823745   68340 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:38:49.823787   68340 stop.go:39] StopHost: embed-certs-311319
	I0924 19:38:49.824318   68340 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:38:49.824378   68340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:38:49.839198   68340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38271
	I0924 19:38:49.839696   68340 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:38:49.840257   68340 main.go:141] libmachine: Using API Version  1
	I0924 19:38:49.840279   68340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:38:49.840613   68340 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:38:49.843233   68340 out.go:177] * Stopping node "embed-certs-311319"  ...
	I0924 19:38:49.844617   68340 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 19:38:49.844660   68340 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:38:49.844917   68340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 19:38:49.844939   68340 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:38:49.848108   68340 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:38:49.848517   68340 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:37:53 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:38:49.848545   68340 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:38:49.848685   68340 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:38:49.848857   68340 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:38:49.849022   68340 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:38:49.849211   68340 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:38:49.941016   68340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 19:38:50.003534   68340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 19:38:50.063599   68340 main.go:141] libmachine: Stopping "embed-certs-311319"...
	I0924 19:38:50.063658   68340 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:38:50.065423   68340 main.go:141] libmachine: (embed-certs-311319) Calling .Stop
	I0924 19:38:50.069399   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 0/120
	I0924 19:38:51.070988   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 1/120
	I0924 19:38:52.073178   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 2/120
	I0924 19:38:53.074511   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 3/120
	I0924 19:38:54.075937   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 4/120
	I0924 19:38:55.078004   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 5/120
	I0924 19:38:56.079970   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 6/120
	I0924 19:38:57.081508   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 7/120
	I0924 19:38:58.083002   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 8/120
	I0924 19:38:59.084471   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 9/120
	I0924 19:39:00.086917   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 10/120
	I0924 19:39:01.088441   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 11/120
	I0924 19:39:02.089883   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 12/120
	I0924 19:39:03.091268   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 13/120
	I0924 19:39:04.093141   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 14/120
	I0924 19:39:05.095037   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 15/120
	I0924 19:39:06.097358   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 16/120
	I0924 19:39:07.098583   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 17/120
	I0924 19:39:08.100027   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 18/120
	I0924 19:39:09.101391   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 19/120
	I0924 19:39:10.103645   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 20/120
	I0924 19:39:11.105005   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 21/120
	I0924 19:39:12.107232   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 22/120
	I0924 19:39:13.108789   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 23/120
	I0924 19:39:14.110445   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 24/120
	I0924 19:39:15.111904   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 25/120
	I0924 19:39:16.113388   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 26/120
	I0924 19:39:17.114690   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 27/120
	I0924 19:39:18.116192   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 28/120
	I0924 19:39:19.118080   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 29/120
	I0924 19:39:20.120332   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 30/120
	I0924 19:39:21.121694   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 31/120
	I0924 19:39:22.122931   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 32/120
	I0924 19:39:23.124112   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 33/120
	I0924 19:39:24.125529   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 34/120
	I0924 19:39:25.127523   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 35/120
	I0924 19:39:26.128992   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 36/120
	I0924 19:39:27.130442   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 37/120
	I0924 19:39:28.131988   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 38/120
	I0924 19:39:29.133454   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 39/120
	I0924 19:39:30.135272   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 40/120
	I0924 19:39:31.137337   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 41/120
	I0924 19:39:32.138640   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 42/120
	I0924 19:39:33.140052   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 43/120
	I0924 19:39:34.141434   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 44/120
	I0924 19:39:35.143529   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 45/120
	I0924 19:39:36.145062   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 46/120
	I0924 19:39:37.146355   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 47/120
	I0924 19:39:38.148477   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 48/120
	I0924 19:39:39.149826   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 49/120
	I0924 19:39:40.151992   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 50/120
	I0924 19:39:41.153311   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 51/120
	I0924 19:39:42.155149   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 52/120
	I0924 19:39:43.156509   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 53/120
	I0924 19:39:44.157960   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 54/120
	I0924 19:39:45.159963   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 55/120
	I0924 19:39:46.161231   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 56/120
	I0924 19:39:47.162589   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 57/120
	I0924 19:39:48.163875   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 58/120
	I0924 19:39:49.165193   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 59/120
	I0924 19:39:50.167378   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 60/120
	I0924 19:39:51.168571   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 61/120
	I0924 19:39:52.169876   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 62/120
	I0924 19:39:53.171171   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 63/120
	I0924 19:39:54.172562   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 64/120
	I0924 19:39:55.174423   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 65/120
	I0924 19:39:56.175715   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 66/120
	I0924 19:39:57.177174   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 67/120
	I0924 19:39:58.178651   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 68/120
	I0924 19:39:59.180173   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 69/120
	I0924 19:40:00.182508   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 70/120
	I0924 19:40:01.183859   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 71/120
	I0924 19:40:02.185298   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 72/120
	I0924 19:40:03.186791   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 73/120
	I0924 19:40:04.188150   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 74/120
	I0924 19:40:05.190137   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 75/120
	I0924 19:40:06.191528   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 76/120
	I0924 19:40:07.192845   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 77/120
	I0924 19:40:08.194091   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 78/120
	I0924 19:40:09.195719   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 79/120
	I0924 19:40:10.197173   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 80/120
	I0924 19:40:11.198517   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 81/120
	I0924 19:40:12.199829   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 82/120
	I0924 19:40:13.201284   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 83/120
	I0924 19:40:14.202780   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 84/120
	I0924 19:40:15.204766   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 85/120
	I0924 19:40:16.206233   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 86/120
	I0924 19:40:17.207757   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 87/120
	I0924 19:40:18.209442   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 88/120
	I0924 19:40:19.210753   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 89/120
	I0924 19:40:20.212939   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 90/120
	I0924 19:40:21.214285   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 91/120
	I0924 19:40:22.215617   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 92/120
	I0924 19:40:23.216946   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 93/120
	I0924 19:40:24.218382   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 94/120
	I0924 19:40:25.220500   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 95/120
	I0924 19:40:26.221887   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 96/120
	I0924 19:40:27.223260   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 97/120
	I0924 19:40:28.224445   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 98/120
	I0924 19:40:29.225884   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 99/120
	I0924 19:40:30.228052   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 100/120
	I0924 19:40:31.229317   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 101/120
	I0924 19:40:32.230644   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 102/120
	I0924 19:40:33.231837   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 103/120
	I0924 19:40:34.233122   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 104/120
	I0924 19:40:35.235001   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 105/120
	I0924 19:40:36.236593   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 106/120
	I0924 19:40:37.238009   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 107/120
	I0924 19:40:38.239305   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 108/120
	I0924 19:40:39.240601   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 109/120
	I0924 19:40:40.242794   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 110/120
	I0924 19:40:41.244173   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 111/120
	I0924 19:40:42.245389   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 112/120
	I0924 19:40:43.246710   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 113/120
	I0924 19:40:44.248046   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 114/120
	I0924 19:40:45.250138   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 115/120
	I0924 19:40:46.251458   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 116/120
	I0924 19:40:47.252971   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 117/120
	I0924 19:40:48.254314   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 118/120
	I0924 19:40:49.255783   68340 main.go:141] libmachine: (embed-certs-311319) Waiting for machine to stop 119/120
	I0924 19:40:50.256689   68340 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 19:40:50.256754   68340 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 19:40:50.258865   68340 out.go:201] 
	W0924 19:40:50.260268   68340 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 19:40:50.260285   68340 out.go:270] * 
	* 
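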
	W0924 19:40:50.262814   68340 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:40:50.264308   68340 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-311319 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
E0924 19:40:54.512344   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:00.187038   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319: exit status 3 (18.425982331s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:08.691101   69041 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host
	E0924 19:41:08.691117   69041 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-311319" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.91s)
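Note: the stop failure above follows a consistent pattern: 120 one-second "Waiting for machine to stop i/120" iterations (about two minutes, matching the 2m0s command duration), then exit status 82 with GUEST_STOP_TIMEOUT because the VM never leaves the "Running" state. The Go sketch below illustrates that poll-until-stopped pattern only; it is not minikube's actual stop.go, and waitForStop/isRunning are hypothetical names used for illustration.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls isRunning once per interval, up to attempts times,
// and gives up with an error if the machine is still running.
func waitForStop(isRunning func() bool, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if !isRunning() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never leaves "Running", as in this report; the short
	// interval is only so the demo finishes quickly.
	alwaysRunning := func() bool { return true }
	if err := waitForStop(alwaysRunning, 120, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}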

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-965745 --alsologtostderr -v=3
E0924 19:39:07.085574   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.091944   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.103870   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.125248   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.166729   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.248332   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.409886   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:07.731955   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:08.374012   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:09.656072   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:12.217639   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:17.339394   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:27.581486   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-965745 --alsologtostderr -v=3: exit status 82 (2m0.485749129s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-965745"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:39:06.171916   68524 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:39:06.172028   68524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:39:06.172038   68524 out.go:358] Setting ErrFile to fd 2...
	I0924 19:39:06.172042   68524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:39:06.172229   68524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:39:06.172458   68524 out.go:352] Setting JSON to false
	I0924 19:39:06.172561   68524 mustload.go:65] Loading cluster: no-preload-965745
	I0924 19:39:06.172916   68524 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:39:06.173000   68524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:39:06.173176   68524 mustload.go:65] Loading cluster: no-preload-965745
	I0924 19:39:06.173294   68524 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:39:06.173324   68524 stop.go:39] StopHost: no-preload-965745
	I0924 19:39:06.173759   68524 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:39:06.173807   68524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:39:06.188573   68524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I0924 19:39:06.189122   68524 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:39:06.189677   68524 main.go:141] libmachine: Using API Version  1
	I0924 19:39:06.189720   68524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:39:06.190010   68524 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:39:06.192200   68524 out.go:177] * Stopping node "no-preload-965745"  ...
	I0924 19:39:06.193496   68524 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 19:39:06.193522   68524 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:39:06.193782   68524 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 19:39:06.193815   68524 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:39:06.196892   68524 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:39:06.197340   68524 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:37:29 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:39:06.197384   68524 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:39:06.197583   68524 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:39:06.197770   68524 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:39:06.197945   68524 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:39:06.198086   68524 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:39:06.285101   68524 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 19:39:06.342392   68524 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 19:39:06.410515   68524 main.go:141] libmachine: Stopping "no-preload-965745"...
	I0924 19:39:06.410558   68524 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:39:06.412135   68524 main.go:141] libmachine: (no-preload-965745) Calling .Stop
	I0924 19:39:06.415597   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 0/120
	I0924 19:39:07.417163   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 1/120
	I0924 19:39:08.418514   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 2/120
	I0924 19:39:09.419874   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 3/120
	I0924 19:39:10.421490   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 4/120
	I0924 19:39:11.423422   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 5/120
	I0924 19:39:12.425268   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 6/120
	I0924 19:39:13.426532   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 7/120
	I0924 19:39:14.428050   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 8/120
	I0924 19:39:15.429386   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 9/120
	I0924 19:39:16.431543   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 10/120
	I0924 19:39:17.433469   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 11/120
	I0924 19:39:18.434760   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 12/120
	I0924 19:39:19.436162   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 13/120
	I0924 19:39:20.437431   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 14/120
	I0924 19:39:21.439292   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 15/120
	I0924 19:39:22.440567   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 16/120
	I0924 19:39:23.441843   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 17/120
	I0924 19:39:24.443231   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 18/120
	I0924 19:39:25.445581   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 19/120
	I0924 19:39:26.447867   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 20/120
	I0924 19:39:27.449410   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 21/120
	I0924 19:39:28.450732   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 22/120
	I0924 19:39:29.451944   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 23/120
	I0924 19:39:30.453345   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 24/120
	I0924 19:39:31.455652   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 25/120
	I0924 19:39:32.456975   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 26/120
	I0924 19:39:33.458439   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 27/120
	I0924 19:39:34.459769   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 28/120
	I0924 19:39:35.461259   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 29/120
	I0924 19:39:36.463388   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 30/120
	I0924 19:39:37.464786   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 31/120
	I0924 19:39:38.466261   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 32/120
	I0924 19:39:39.467782   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 33/120
	I0924 19:39:40.469489   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 34/120
	I0924 19:39:41.471555   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 35/120
	I0924 19:39:42.473192   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 36/120
	I0924 19:39:43.474681   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 37/120
	I0924 19:39:44.476152   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 38/120
	I0924 19:39:45.477599   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 39/120
	I0924 19:39:46.479359   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 40/120
	I0924 19:39:47.481450   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 41/120
	I0924 19:39:48.482930   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 42/120
	I0924 19:39:49.484441   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 43/120
	I0924 19:39:50.486097   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 44/120
	I0924 19:39:51.487965   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 45/120
	I0924 19:39:52.489351   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 46/120
	I0924 19:39:53.490823   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 47/120
	I0924 19:39:54.492269   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 48/120
	I0924 19:39:55.493875   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 49/120
	I0924 19:39:56.496163   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 50/120
	I0924 19:39:57.497678   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 51/120
	I0924 19:39:58.499093   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 52/120
	I0924 19:39:59.500768   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 53/120
	I0924 19:40:00.502312   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 54/120
	I0924 19:40:01.504382   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 55/120
	I0924 19:40:02.505920   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 56/120
	I0924 19:40:03.507443   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 57/120
	I0924 19:40:04.509220   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 58/120
	I0924 19:40:05.510776   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 59/120
	I0924 19:40:06.513084   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 60/120
	I0924 19:40:07.514583   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 61/120
	I0924 19:40:08.516355   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 62/120
	I0924 19:40:09.517742   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 63/120
	I0924 19:40:10.519236   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 64/120
	I0924 19:40:11.521332   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 65/120
	I0924 19:40:12.523318   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 66/120
	I0924 19:40:13.524953   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 67/120
	I0924 19:40:14.526581   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 68/120
	I0924 19:40:15.528367   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 69/120
	I0924 19:40:16.530928   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 70/120
	I0924 19:40:17.532357   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 71/120
	I0924 19:40:18.534108   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 72/120
	I0924 19:40:19.535471   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 73/120
	I0924 19:40:20.537074   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 74/120
	I0924 19:40:21.539210   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 75/120
	I0924 19:40:22.540460   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 76/120
	I0924 19:40:23.541702   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 77/120
	I0924 19:40:24.543051   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 78/120
	I0924 19:40:25.544562   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 79/120
	I0924 19:40:26.546784   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 80/120
	I0924 19:40:27.549051   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 81/120
	I0924 19:40:28.550306   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 82/120
	I0924 19:40:29.551694   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 83/120
	I0924 19:40:30.552995   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 84/120
	I0924 19:40:31.555184   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 85/120
	I0924 19:40:32.556434   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 86/120
	I0924 19:40:33.558021   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 87/120
	I0924 19:40:34.559416   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 88/120
	I0924 19:40:35.560717   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 89/120
	I0924 19:40:36.562961   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 90/120
	I0924 19:40:37.564456   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 91/120
	I0924 19:40:38.565727   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 92/120
	I0924 19:40:39.567365   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 93/120
	I0924 19:40:40.568777   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 94/120
	I0924 19:40:41.570655   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 95/120
	I0924 19:40:42.572164   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 96/120
	I0924 19:40:43.573535   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 97/120
	I0924 19:40:44.575100   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 98/120
	I0924 19:40:45.576540   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 99/120
	I0924 19:40:46.578710   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 100/120
	I0924 19:40:47.580013   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 101/120
	I0924 19:40:48.581510   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 102/120
	I0924 19:40:49.582892   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 103/120
	I0924 19:40:50.584306   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 104/120
	I0924 19:40:51.586283   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 105/120
	I0924 19:40:52.587693   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 106/120
	I0924 19:40:53.589141   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 107/120
	I0924 19:40:54.590588   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 108/120
	I0924 19:40:55.592245   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 109/120
	I0924 19:40:56.593710   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 110/120
	I0924 19:40:57.595218   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 111/120
	I0924 19:40:58.596722   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 112/120
	I0924 19:40:59.598331   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 113/120
	I0924 19:41:00.599510   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 114/120
	I0924 19:41:01.601582   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 115/120
	I0924 19:41:02.603216   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 116/120
	I0924 19:41:03.604475   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 117/120
	I0924 19:41:04.605942   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 118/120
	I0924 19:41:05.607484   68524 main.go:141] libmachine: (no-preload-965745) Waiting for machine to stop 119/120
	I0924 19:41:06.608106   68524 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 19:41:06.608184   68524 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 19:41:06.610191   68524 out.go:201] 
	W0924 19:41:06.611672   68524 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 19:41:06.611689   68524 out.go:270] * 
	* 
	W0924 19:41:06.614130   68524 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:41:06.615365   68524 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-965745 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
E0924 19:41:08.227999   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.234397   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.245813   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.267678   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.309091   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.390600   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:08.551903   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745: exit status 3 (18.458714064s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:25.075212   69119 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host
	E0924 19:41:25.075232   69119 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-965745" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.95s)
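Note: the post-mortem above runs "out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745", gets exit status 3 because SSH to the node is unreachable ("no route to host"), and therefore skips log retrieval. The following is a minimal Go sketch of shelling out to that command and reading the exit code; hostStatus is a hypothetical helper written for illustration, not part of helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs the same status command used in the post-mortem above and
// returns the printed host state plus the process exit code.
func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit (e.g. 3 when the host is unreachable) still leaves
		// the state string on stdout.
		return strings.TrimSpace(string(out)), exitErr.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	state, code, err := hostStatus("no-preload-965745")
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	// Exit status 3 "may be ok": the host is not running, so the harness
	// skips log retrieval instead of failing the post-mortem itself.
	fmt.Printf("host state %q, exit status %d\n", state, code)
}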

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-093771 --alsologtostderr -v=3
E0924 19:39:48.063654   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:48.502236   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:49.790070   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:58.743581   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.536378   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.542787   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.554119   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.575491   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.616941   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.698422   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:13.859933   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:14.182100   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:14.824307   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:16.106050   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:18.667474   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:19.225153   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:23.789234   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:29.025207   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:40:34.030564   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-093771 --alsologtostderr -v=3: exit status 82 (2m0.492170853s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-093771"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:39:47.700303   68792 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:39:47.700557   68792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:39:47.700566   68792 out.go:358] Setting ErrFile to fd 2...
	I0924 19:39:47.700571   68792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:39:47.700763   68792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:39:47.701020   68792 out.go:352] Setting JSON to false
	I0924 19:39:47.701096   68792 mustload.go:65] Loading cluster: default-k8s-diff-port-093771
	I0924 19:39:47.701437   68792 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:39:47.701504   68792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:39:47.701679   68792 mustload.go:65] Loading cluster: default-k8s-diff-port-093771
	I0924 19:39:47.701780   68792 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:39:47.701809   68792 stop.go:39] StopHost: default-k8s-diff-port-093771
	I0924 19:39:47.702172   68792 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:39:47.702222   68792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:39:47.717150   68792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
	I0924 19:39:47.717587   68792 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:39:47.718122   68792 main.go:141] libmachine: Using API Version  1
	I0924 19:39:47.718151   68792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:39:47.718465   68792 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:39:47.720791   68792 out.go:177] * Stopping node "default-k8s-diff-port-093771"  ...
	I0924 19:39:47.721953   68792 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 19:39:47.721990   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:39:47.722188   68792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 19:39:47.722208   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:39:47.724943   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:39:47.725365   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:38:20 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:39:47.725393   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:39:47.725515   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:39:47.725688   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:39:47.725812   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:39:47.725926   68792 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:39:47.816561   68792 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 19:39:47.874009   68792 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 19:39:47.946129   68792 main.go:141] libmachine: Stopping "default-k8s-diff-port-093771"...
	I0924 19:39:47.946160   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:39:47.947718   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Stop
	I0924 19:39:47.951214   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 0/120
	I0924 19:39:48.952486   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 1/120
	I0924 19:39:49.953668   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 2/120
	I0924 19:39:50.954938   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 3/120
	I0924 19:39:51.956253   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 4/120
	I0924 19:39:52.958214   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 5/120
	I0924 19:39:53.959669   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 6/120
	I0924 19:39:54.961048   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 7/120
	I0924 19:39:55.962575   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 8/120
	I0924 19:39:56.963884   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 9/120
	I0924 19:39:57.965306   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 10/120
	I0924 19:39:58.966717   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 11/120
	I0924 19:39:59.968044   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 12/120
	I0924 19:40:00.969442   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 13/120
	I0924 19:40:01.970786   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 14/120
	I0924 19:40:02.972655   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 15/120
	I0924 19:40:03.974136   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 16/120
	I0924 19:40:04.975464   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 17/120
	I0924 19:40:05.976874   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 18/120
	I0924 19:40:06.978416   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 19/120
	I0924 19:40:07.980615   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 20/120
	I0924 19:40:08.982000   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 21/120
	I0924 19:40:09.983285   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 22/120
	I0924 19:40:10.984628   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 23/120
	I0924 19:40:11.986107   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 24/120
	I0924 19:40:12.988192   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 25/120
	I0924 19:40:13.989572   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 26/120
	I0924 19:40:14.990929   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 27/120
	I0924 19:40:15.992433   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 28/120
	I0924 19:40:16.993956   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 29/120
	I0924 19:40:17.996259   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 30/120
	I0924 19:40:18.997754   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 31/120
	I0924 19:40:19.999204   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 32/120
	I0924 19:40:21.001418   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 33/120
	I0924 19:40:22.002805   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 34/120
	I0924 19:40:23.004731   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 35/120
	I0924 19:40:24.006156   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 36/120
	I0924 19:40:25.007685   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 37/120
	I0924 19:40:26.009478   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 38/120
	I0924 19:40:27.010930   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 39/120
	I0924 19:40:28.013306   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 40/120
	I0924 19:40:29.014885   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 41/120
	I0924 19:40:30.016352   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 42/120
	I0924 19:40:31.017598   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 43/120
	I0924 19:40:32.019177   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 44/120
	I0924 19:40:33.021172   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 45/120
	I0924 19:40:34.022311   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 46/120
	I0924 19:40:35.023775   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 47/120
	I0924 19:40:36.025007   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 48/120
	I0924 19:40:37.026623   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 49/120
	I0924 19:40:38.028907   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 50/120
	I0924 19:40:39.030211   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 51/120
	I0924 19:40:40.031698   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 52/120
	I0924 19:40:41.032951   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 53/120
	I0924 19:40:42.034230   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 54/120
	I0924 19:40:43.036212   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 55/120
	I0924 19:40:44.037633   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 56/120
	I0924 19:40:45.038881   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 57/120
	I0924 19:40:46.040337   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 58/120
	I0924 19:40:47.041957   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 59/120
	I0924 19:40:48.044494   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 60/120
	I0924 19:40:49.045735   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 61/120
	I0924 19:40:50.047145   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 62/120
	I0924 19:40:51.049311   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 63/120
	I0924 19:40:52.050753   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 64/120
	I0924 19:40:53.053049   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 65/120
	I0924 19:40:54.054318   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 66/120
	I0924 19:40:55.056087   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 67/120
	I0924 19:40:56.057483   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 68/120
	I0924 19:40:57.059202   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 69/120
	I0924 19:40:58.061484   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 70/120
	I0924 19:40:59.063090   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 71/120
	I0924 19:41:00.064597   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 72/120
	I0924 19:41:01.065977   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 73/120
	I0924 19:41:02.067468   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 74/120
	I0924 19:41:03.069847   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 75/120
	I0924 19:41:04.071301   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 76/120
	I0924 19:41:05.072884   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 77/120
	I0924 19:41:06.074584   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 78/120
	I0924 19:41:07.076204   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 79/120
	I0924 19:41:08.078412   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 80/120
	I0924 19:41:09.079840   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 81/120
	I0924 19:41:10.081306   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 82/120
	I0924 19:41:11.082722   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 83/120
	I0924 19:41:12.084020   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 84/120
	I0924 19:41:13.085963   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 85/120
	I0924 19:41:14.087393   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 86/120
	I0924 19:41:15.088979   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 87/120
	I0924 19:41:16.090624   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 88/120
	I0924 19:41:17.092023   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 89/120
	I0924 19:41:18.094198   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 90/120
	I0924 19:41:19.095677   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 91/120
	I0924 19:41:20.096967   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 92/120
	I0924 19:41:21.098614   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 93/120
	I0924 19:41:22.100233   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 94/120
	I0924 19:41:23.102919   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 95/120
	I0924 19:41:24.104434   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 96/120
	I0924 19:41:25.105867   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 97/120
	I0924 19:41:26.107320   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 98/120
	I0924 19:41:27.109025   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 99/120
	I0924 19:41:28.111371   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 100/120
	I0924 19:41:29.113365   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 101/120
	I0924 19:41:30.114814   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 102/120
	I0924 19:41:31.116271   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 103/120
	I0924 19:41:32.117534   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 104/120
	I0924 19:41:33.119477   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 105/120
	I0924 19:41:34.121010   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 106/120
	I0924 19:41:35.122404   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 107/120
	I0924 19:41:36.123880   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 108/120
	I0924 19:41:37.125223   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 109/120
	I0924 19:41:38.127610   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 110/120
	I0924 19:41:39.128838   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 111/120
	I0924 19:41:40.130256   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 112/120
	I0924 19:41:41.131606   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 113/120
	I0924 19:41:42.132985   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 114/120
	I0924 19:41:43.134904   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 115/120
	I0924 19:41:44.136366   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 116/120
	I0924 19:41:45.137750   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 117/120
	I0924 19:41:46.139260   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 118/120
	I0924 19:41:47.140654   68792 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for machine to stop 119/120
	I0924 19:41:48.141547   68792 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 19:41:48.141614   68792 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 19:41:48.143462   68792 out.go:201] 
	W0924 19:41:48.144762   68792 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 19:41:48.144778   68792 out.go:270] * 
	* 
	W0924 19:41:48.147251   68792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:41:48.148463   68792 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-093771 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
E0924 19:41:49.204867   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:49.793068   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:50.946789   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:54.914544   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.223888   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.230271   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.241593   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.262930   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.304340   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.385784   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.547223   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:57.869444   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:58.510806   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:59.792714   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:02.354735   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:05.155924   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771: exit status 3 (18.653533755s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:42:06.803191   69644 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E0924 19:42:06.803211   69644 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-093771" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
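For manual triage of a GUEST_STOP_TIMEOUT like the one above, the same sequence the test drives can be replayed by hand. This is only a sketch: the profile name is taken from the log above, and out/minikube-linux-amd64 is the CI build (a packaged install would just invoke minikube):

	out/minikube-linux-amd64 stop -p default-k8s-diff-port-093771 --alsologtostderr -v=3    # same stop invocation the test runs
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771      # should report "Stopped" after a clean stop
	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-093771           # log bundle the error box asks for

In this run the stop command polled the VM 120 times at roughly one-second intervals before giving up with the state still "Running", so the follow-up status call is what separates a VM that eventually shut down from one that never did.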

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319: exit status 3 (3.199436519s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:11.891217   69163 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host
	E0924 19:41:11.891245   69163 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-311319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0924 19:41:13.359552   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-311319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153041353s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-311319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
E0924 19:41:18.481277   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319: exit status 3 (3.063326089s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:21.107258   69360 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host
	E0924 19:41:21.107283   69360 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.21:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-311319" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
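The addon enable above never reaches the cluster: every SSH dial to 192.168.61.21:22 fails with "no route to host", so even the crictl list used for the paused check cannot run. Since this run uses the kvm2 driver, the libvirt domain (named after the profile) can be inspected directly; this is a diagnostic sketch, not part of the test, and assumes virsh from libvirt-clients is available on the Jenkins host:

	out/minikube-linux-amd64 status -p embed-certs-311319 --alsologtostderr   # minikube's own view of the host
	sudo virsh list --all                                                     # is the embed-certs-311319 domain running or shut off?
	sudo virsh dominfo embed-certs-311319                                     # state, CPU and memory of the domain

A domain that libvirt reports as running while SSH stays unreachable usually points at the guest network rather than the addon path itself.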

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-510301 create -f testdata/busybox.yaml
E0924 19:41:08.873203   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-510301 create -f testdata/busybox.yaml: exit status 1 (42.800587ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-510301" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-510301 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 6 (212.38037ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:09.103186   69235 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510301" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 6 (204.467541ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:09.307497   69265 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510301" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
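Here the failure is entirely on the client side: the old-k8s-version-510301 context was never written to the kubeconfig, so kubectl has nothing to talk to, and minikube's status output already suggests the repair. A minimal sketch of that path follows; whether update-context actually fixes this run depends on why the endpoint is missing in the first place:

	kubectl config get-contexts                                                 # confirm old-k8s-version-510301 is absent
	out/minikube-linux-amd64 update-context -p old-k8s-version-510301           # what the status warning recommends
	kubectl --context old-k8s-version-510301 create -f testdata/busybox.yaml    # retry the deploy the test attempted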

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0924 19:41:09.515425   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:10.797254   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.419618855s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-510301 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-510301 describe deploy/metrics-server -n kube-system: exit status 1 (43.138241ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-510301" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-510301 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 6 (213.593229ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:42:44.984306   70036 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510301" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.68s)
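This enable fails one layer deeper: minikube reaches the VM and runs the in-guest kubectl apply, but the apiserver on localhost:8443 refuses the connection, i.e. the v1.20.0 control plane never came up. A couple of in-guest checks, sketched on the assumption that the standard minikube guest layout applies (kubelet as a systemd unit, CRI-O behind crictl):

	out/minikube-linux-amd64 ssh -p old-k8s-version-510301 -- sudo crictl ps -a                              # is kube-apiserver created and running?
	out/minikube-linux-amd64 ssh -p old-k8s-version-510301 -- sudo journalctl -u kubelet --no-pager -n 50    # recent kubelet errors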

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745: exit status 3 (3.167680044s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:28.243214   69465 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host
	E0924 19:41:28.243235   69465 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-965745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0924 19:41:28.722754   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-965745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152446268s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-965745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
E0924 19:41:35.473991   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745: exit status 3 (3.063287589s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:41:37.459254   69546 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host
	E0924 19:41:37.459277   69546 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.134:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-965745" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
E0924 19:42:07.476175   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771: exit status 3 (3.167810368s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:42:09.971192   69758 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E0924 19:42:09.971215   69758 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-093771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-093771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152117903s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-093771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
E0924 19:42:17.718144   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771: exit status 3 (3.063669806s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 19:42:19.187240   69857 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E0924 19:42:19.187270   69857 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-093771" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (740.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0924 19:42:48.078901   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:57.395380   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:06.599924   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:08.560362   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:19.161195   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:47.339217   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:49.522657   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:43:52.089428   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:07.085071   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:28.521352   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:34.788510   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:38.248581   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:41.083447   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:44:49.790642   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:45:05.951218   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:45:11.444370   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:45:13.537809   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:45:41.237176   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:08.228064   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:35.931393   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:44.662696   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:46:57.222900   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:47:12.362694   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:47:24.266402   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:47:24.925745   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:47:27.583360   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:47:55.286703   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:49:07.085161   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:49:38.248971   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:49:49.790117   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:50:13.536818   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m16.629520923s)

                                                
                                                
-- stdout --
	* [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
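Every iteration in this window (19:48:45 through 19:49:22) ends the same way: no control-plane containers are found and "kubectl describe nodes" fails with connection refused on localhost:8443, so the API server on this node is not listening at any point during these retries. A quick probe that would confirm this from the node (an illustrative assumption, not part of the test harness; the port comes from the error text above):

    # Expect "connection refused" while the apiserver is down
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"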
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
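	The recurring failure in each cycle is the same: "The connection to the server localhost:8443 was refused", i.e. nothing is serving the apiserver port inside the node, so every describe-nodes attempt exits with status 1. A short sketch of follow-up checks from inside the node, assuming the default minikube apiserver port 8443 (PROFILE is again a placeholder):

	PROFILE="old-k8s-version"   # placeholder profile name
	# Is anything listening on the apiserver port?
	minikube -p "$PROFILE" ssh -- "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"
	# Are the control-plane static-pod manifests present for kubelet to start?
	minikube -p "$PROFILE" ssh -- "sudo ls -l /etc/kubernetes/manifests"
	# What does kubelet report about the apiserver static pod?
	minikube -p "$PROFILE" ssh -- "sudo journalctl -u kubelet -n 100 --no-pager | grep -i apiserver"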
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	* 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	* 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
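The start command above exited 109 after the K8S_KUBELET_NOT_RUNNING error shown in the captured log. A minimal sketch of the follow-up the log itself suggests (hypothetical commands, not run as part of this test): check the kubelet journal on the VM, then retry the same start with the kubelet cgroup driver pinned to systemd via the hinted --extra-config flag. The ssh wrapper around journalctl is an assumption; the start flags are the ones used by this test.
	# Sketch only, assuming the same profile name used by this run.
	out/minikube-linux-amd64 -p old-k8s-version-510301 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd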
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (225.655131ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25: (1.543680191s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
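The 69576 entries above trace minikube's cached-image load path on this node: each tarball under /var/lib/minikube/images is stat'ed first, the transfer is skipped when it already exists ("copy: skipping ... (exists)"), the image is imported into the CRI-O store with `sudo podman load -i ...`, and stale tags are removed beforehand with `crictl rmi`. A minimal sketch of that skip-if-present pattern; the local exec calls stand in for minikube's SSH runner, and the helper name and path are illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the pattern visible in the log lines above: stat the
// image tarball on the node, skip the transfer when it already exists, then
// import it into the CRI-O image store with podman. Local exec.Command calls
// stand in for minikube's ssh_runner.
func loadCachedImage(tarball string) error {
	// Equivalent of: stat -c "%s %y" <tarball>  -> success means "copy: skipping ... (exists)"
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		return fmt.Errorf("tarball not on node, would transfer it from the cache first: %w", err)
	}
	// Equivalent of: sudo podman load -i <tarball>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.31.1"); err != nil {
		fmt.Println(err)
	}
}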
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
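The interleaved 69904 entries show libmachine polling libvirt for the restarted default-k8s-diff-port-093771 domain's DHCP lease and backing off between attempts (463ms, 595ms, 862ms, 1.4s, 1.6s, ... per retry.go). A rough sketch of that wait-with-growing-delay loop; lookupIP is a stand-in for the kvm2 driver's lease query, and the delay formula only approximates the jittered backoff seen in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for asking libvirt for the domain's DHCP lease; the
// real kvm2 driver reads the host DHCP leases, as the DBG lines above show.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.116", nil
}

func main() {
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Delays in the log grow roughly geometrically with jitter; this
		// formula is an approximation for illustration.
		delay := time.Duration(float64(300*time.Millisecond) * (1.3 + rand.Float64()) * float64(attempt))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}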
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
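provision.go generates a fresh server.pem for the machine, signed by the minikube CA (ca.pem/ca-key.pem) and carrying the SANs listed above: 127.0.0.1, 192.168.50.116, the machine name, localhost and minikube. The sketch below produces a certificate with those same SANs using only the Go standard library; it self-signs rather than signing with ca-key.pem, which is a simplification for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// Generates a self-signed server certificate with the SANs shown in the log
// for default-k8s-diff-port-093771. minikube actually signs server.pem with
// its own CA; self-signing here keeps the sketch short.
func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-093771"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.116")},
		DNSNames:    []string{"default-k8s-diff-port-093771", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote a self-signed stand-in for server.pem")
}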
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
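fix.go reads the guest's clock over SSH with `date +%s.%N` and compares it to the host's timestamp for the same moment; here the 81.282477ms delta is accepted without a resync. Reproducing that arithmetic from the values in the log (the tolerance constant below is assumed purely for illustration; the log does not state minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log: guest clock from `date +%s.%N`, host clock
	// recorded when the command returned.
	guest := time.Unix(1727207195, 798332273)
	host := time.Date(2024, 9, 24, 19, 46, 35, 717049796, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for illustration only.
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}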
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
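Before that restart, crio.go rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a series of `sed -i` edits: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start, then loads br_netfilter and enables IP forwarding. A Go equivalent of the two central edits (local file I/O here stands in for the sed-over-SSH commands in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrites the pause image and cgroup manager lines of 02-crio.conf, matching
// the effect of the sed commands shown above. The in-process rewrite is
// illustrative; minikube performs this with sed over SSH.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Println("write:", err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` would follow,
	// as in the log.
}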
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
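
	The kubeadm.yaml.new copied above is the multi-document YAML dumped earlier in the log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch for inspecting such a file outside the test run, assuming the gopkg.in/yaml.v3 module and a local copy named kubeadm.yaml (both are assumptions, not part of the harness):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "io"
	        "log"
	        "os"

	        "gopkg.in/yaml.v3" // assumed dependency for the sketch
	    )

	    func main() {
	        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer f.Close()

	        // The file holds several YAML documents separated by "---"; decode each in turn.
	        dec := yaml.NewDecoder(f)
	        for {
	            var doc struct {
	                APIVersion string `yaml:"apiVersion"`
	                Kind       string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err != nil {
	                if errors.Is(err, io.EOF) {
	                    break // no more documents
	                }
	                log.Fatal(err)
	            }
	            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	        }
	    }

	Running it against the config shown above would list the four apiVersion/kind pairs, which is a quick sanity check that the rendered file is still well-formed multi-document YAML.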
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
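
	The bash one-liner above updates /etc/hosts idempotently: it drops any existing line for control-plane.minikube.internal and appends the current mapping. A rough Go equivalent of that filter-and-append step (a sketch only; hostname, IP, and source path come from the log, and it writes to a scratch file because replacing /etc/hosts needs root):

	    package main

	    import (
	        "fmt"
	        "log"
	        "os"
	        "strings"
	    )

	    func main() {
	        const host = "control-plane.minikube.internal"
	        const ip = "192.168.39.134"

	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            log.Fatal(err)
	        }

	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            // Drop any stale mapping for the control-plane name, keep everything else.
	            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
	                continue
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, ip+"\t"+host)

	        // Write to a scratch file; the log copies the result back over /etc/hosts with sudo.
	        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("wrote /tmp/hosts.new")
	    }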
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
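
	Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit triggers regeneration. The same check in Go using only the standard library (a sketch; the path is one of the certs probed in the log and would normally need root to read on the node):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        // One of the certificates checked in the log.
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Equivalent of `openssl x509 -checkend 86400`: fail if expiry is within 24h.
	        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	            fmt.Println("certificate expires within 24h; regeneration needed")
	            os.Exit(1)
	        }
	        fmt.Println("certificate valid for at least another 24h (NotAfter:", cert.NotAfter, ")")
	    }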
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
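
	The healthz probes above show the usual restart sequence: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles post-start hook is pending, then 200 once the control plane settles. A minimal polling loop in the same spirit (standard library only; it skips TLS verification because the probe is unauthenticated, and the endpoint is the one from the log):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "log"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // The probe is anonymous, so certificate verification is skipped here,
	            // mirroring a plain health check rather than an authenticated API call.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://192.168.39.134:8443/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy:", string(body))
	                    return
	                }
	                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        log.Fatal("apiserver did not become healthy before the deadline")
	    }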
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
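
	The old-k8s-version-510301 VM has no DHCP lease yet, so the driver polls for its IP with growing, jittered waits (294ms, 344ms, 342ms, 456ms, 582ms, 648ms, then around a second). A generic sketch of that retry shape (lookupIP below is a stand-in for the libvirt lease query, not the real driver code):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "log"
	        "math/rand"
	        "time"
	    )

	    // lookupIP is a placeholder for "ask libvirt for the domain's DHCP lease";
	    // here it simply fails a few times before succeeding.
	    func lookupIP(attempt int) (string, error) {
	        if attempt < 5 {
	            return "", errors.New("unable to find current IP address of domain")
	        }
	        return "placeholder-ip", nil // a real lookup returns the lease's address
	    }

	    func main() {
	        base := 250 * time.Millisecond
	        for attempt := 1; attempt <= 10; attempt++ {
	            ip, err := lookupIP(attempt)
	            if err == nil {
	                fmt.Println("machine is up at", ip)
	                return
	            }
	            // Grow the wait with each attempt and add jitter, loosely mimicking the
	            // retry intervals reported by retry.go in the log above.
	            wait := time.Duration(attempt)*base + time.Duration(rand.Intn(200))*time.Millisecond
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
	            time.Sleep(wait)
	        }
	        log.Fatal("machine never reported an IP address")
	    }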
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
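
	The wait loop above lists the system-critical pods and keeps skipping them while the node itself still reports Ready:"False". A compact client-go sketch of the same kind of readiness survey (assuming the k8s.io/client-go module; the kubeconfig path is the one the log writes under the build tree, and this is an illustration, not minikube's pod_ready.go):

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Kubeconfig path taken from the log; any valid kubeconfig works.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, p := range pods.Items {
	            ready := false
	            for _, c := range p.Status.Conditions {
	                // A pod counts as Ready only when its Ready condition is True.
	                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                    ready = true
	                }
	            }
	            fmt.Printf("%-45s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
	        }
	    }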
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
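	(For reference only, not part of the log: the `openssl x509 -noout -in <cert> -checkend 86400` commands above fail when a certificate expires within the next 24 hours. A minimal Go sketch of the same check is below; the certificate path and the 24h window come from the log, while the function name and error handling are illustrative assumptions.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+window,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; 86400s == 24h.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}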
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
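	(For reference only, not part of the log: the sequence above polls the apiserver /healthz endpoint, tolerating the 403 responses seen before RBAC bootstrap and the 500 responses while post-start hooks are still failing, until a 200 is returned. A minimal Go sketch of that polling pattern follows; it is not minikube's implementation. The URL is the one from the log, and the poll interval and timeout are assumptions.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a cluster-internal certificate during bootstrap,
		// so verification is skipped for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected transiently during startup.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.116:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}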
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
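	(For reference only, not part of the log: the node_ready/pod_ready waits above repeatedly query the API for the node's Ready condition and for system pod readiness. A minimal client-go sketch of the node-readiness part is below; the node name and kubeconfig path are taken from the log, while the helper name, poll interval, and error handling are illustrative assumptions.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path as used on the VM in the log; adjust for local use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := nodeReady(ctx, cs, "no-preload-965745")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}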
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
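	The block ending above is the kubeadm/kubelet/kube-proxy configuration that minikube renders and, a few lines below, copies to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch (not part of the test run or of minikube itself), the following standalone Go program cross-checks one invariant this configuration relies on: the kubelet cgroupDriver must agree with the cgroup_manager that was written into /etc/crio/crio.conf.d/02-crio.conf earlier in this log. The file paths and the simple line-prefix parsing are assumptions made purely for illustration.

	// checkcgroup.go: illustrative sketch (not minikube code) that cross-checks
	// the kubelet cgroupDriver in a rendered kubeadm.yaml against the
	// cgroup_manager CRI-O was configured with. Paths are assumptions.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	// firstValue scans a file line by line and returns the trimmed, unquoted
	// remainder of the first line that starts with the given prefix.
	func firstValue(path, prefix string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, prefix) {
				return strings.Trim(strings.TrimSpace(strings.TrimPrefix(line, prefix)), `"`), nil
			}
		}
		return "", fmt.Errorf("%s: no line starting with %q", path, prefix)
	}

	func main() {
		kubelet, err := firstValue("/var/tmp/minikube/kubeadm.yaml", "cgroupDriver:")
		if err != nil {
			log.Fatal(err)
		}
		crio, err := firstValue("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager =")
		if err != nil {
			log.Fatal(err)
		}
		if kubelet != crio {
			log.Fatalf("cgroup driver mismatch: kubelet=%q cri-o=%q", kubelet, crio)
		}
		fmt.Printf("cgroup driver consistent: %s\n", kubelet)
	}
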
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
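The log-gathering pass above is just a series of read-only shell commands on the node (journalctl for kubelet and CRI-O, dmesg, crictl for container status). A minimal Go sketch of the same idea, assuming the commands run locally with sufficient privileges rather than over minikube's SSH runner; the function name gather is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs a handful of read-only diagnostic commands, mirroring the
// "Gathering logs for ..." steps in the log above. The real harness runs
// these over SSH and pipes some of them through tail.
func gather() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("=== %v (err=%v) ===\n%s\n", c, err, out)
	}
}

func main() { gather() }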
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
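The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is missing (or, as in this run, when the file does not exist at all), so kubeadm regenerates it on init. A rough Go equivalent of that cleanup, assuming local file access instead of the SSH runner; the function name is illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanupStaleKubeconfigs removes kubeconfigs that do not reference the
// expected control-plane endpoint. Paths and the endpoint come from the
// log above; error handling is intentionally minimal.
func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("removing stale %s\n", f)
			_ = os.Remove(f) // ignore "file does not exist"
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}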
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
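The kubeadm success message above is the standard how-to for talking to the freshly initialized cluster. One quick way to sanity-check it is to load the generated admin.conf with client-go and list nodes; a minimal sketch under that assumption (the kubeconfig path is the one kubeadm names above, everything else is plain client-go, not minikube's own verification code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig kubeadm just wrote and list the cluster's nodes.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}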
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
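The bridge CNI step above just drops a conflist into /etc/cni/net.d. The exact 496-byte file is not shown in the log; the sketch below writes an illustrative bridge + portmap conflist of the same general shape (the subnet, bridge name and JSON contents are assumptions for the example, not minikube's actual file):

package main

import "os"

// An illustrative bridge CNI configuration; field values are assumptions
// for the example, not the exact file minikube copies to the node.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}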
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
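The repeated `kubectl get sa default` calls above are a simple poll: keep asking until the default service account exists, which is the point at which kube-system privileges can be elevated. A sketch of that poll, shelling out to kubectl with the binary and kubeconfig paths shown in the log (the function name and retry interval are mine):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println("wait result:", err)
}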
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
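The metrics-server addon above is enabled with a single `kubectl apply` over the four manifests that were copied to /etc/kubernetes/addons earlier. A small sketch of that invocation from Go, using the same binary, kubeconfig and manifest paths seen in the log (the wrapper function itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons mirrors the single kubectl apply above that installs the
// metrics-server manifests previously scp'd onto the node.
func applyAddons() error {
	args := []string{"apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := applyAddons(); err != nil {
		panic(err)
	}
}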
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
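The kubelet-check failure above is kubeadm polling the kubelet's local healthz endpoint and getting connection refused, because on this v1.20 run the kubelet never came up. A small sketch of that kind of probe against the endpoint named in the message (the helper function and intervals are mine, not kubeadm's code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubeletHealthz polls http://127.0.0.1:10248/healthz until it answers
// 200 or the deadline passes, like the kubelet-check message above describes.
func probeKubeletHealthz(timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy within %s", timeout)
}

func main() { fmt.Println(probeKubeletHealthz(40 * time.Second)) }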
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
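The pod_ready waits above reduce to reading each pod's PodReady condition until it reports True (or, as with metrics-server earlier, timing out). A compressed client-go sketch of that check, with namespace and label handling simplified relative to the real helper; the kubeconfig path is the node-side one from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named kube-system pod has its PodReady
// condition set to True, which is what the "Ready":"True" lines above mean.
func isPodReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(isPodReady(cs, "kube-proxy-5rw7b"))
}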
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
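The apiserver health wait above is a plain HTTPS GET of /healthz on the cluster endpoint, which here answered "200 ok". A minimal sketch of that probe, assuming (as the 200 above suggests) the cluster allows unauthenticated access to /healthz; the real check trusts the cluster CA, and skipping verification here only keeps the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver healthz endpoint named in the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.116:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}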
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
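With the kubeconfig context written, the new cluster can be spot-checked from the host; a sketch, using the context name from the line above:

  # confirm kubectl now points at the new profile, then list its system pods
  kubectl config current-context
  kubectl --context default-k8s-diff-port-093771 get pods -n kube-system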
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
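The cleanup sequence above applies the same pattern to each kubeconfig: grep for the expected control-plane endpoint and remove the file if the check fails. A condensed sketch of that pattern (endpoint and file names are taken from the log; the loop itself is illustrative, not minikube's code):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # a missing or stale config fails the grep, so it is removed before kubeadm init
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done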
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
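The 496-byte conflist pushed above is minikube's generated bridge CNI config for the crio runtime; its contents are not echoed in the log, but they can be inspected in place on the node:

  # list CNI configs and show the one minikube just wrote
  sudo ls /etc/cni/net.d/
  sudo cat /etc/cni/net.d/1-k8s.conflist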
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
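Once the four manifests above are applied, the addon's state can be checked with ordinary kubectl once it is pointed at the cluster; a sketch (the deployment name matches the metrics-server pod seen later in the log, and the APIService name is the one metrics-server conventionally registers):

  # deployment rollout and the aggregated API registration
  kubectl -n kube-system get deploy metrics-server
  kubectl get apiservice v1beta1.metrics.k8s.io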
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
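The same addons can be toggled outside the test harness with the minikube CLI against this profile; a sketch:

  # show addon state for the profile, then (re-)enable metrics-server
  minikube addons list -p embed-certs-311319
  minikube addons enable metrics-server -p embed-certs-311319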
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
	
	
	==> CRI-O <==
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.896741119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207704896720923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96e1b204-ee61-4edb-80cf-30b97c1d90f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.898615651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7acbbff6-87da-4117-b855-574bfa7aef7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.898670639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7acbbff6-87da-4117-b855-574bfa7aef7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.898712427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7acbbff6-87da-4117-b855-574bfa7aef7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.928906789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df67565f-863a-4973-918f-36a2774fbef5 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.928989062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df67565f-863a-4973-918f-36a2774fbef5 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.929828555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb437493-75be-4199-883f-b6ebb45d2fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.930209300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207704930188728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb437493-75be-4199-883f-b6ebb45d2fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.930713927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cd954c0-c6ad-407a-9cf0-4d6bca55d75f name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.930772375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cd954c0-c6ad-407a-9cf0-4d6bca55d75f name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.930806229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4cd954c0-c6ad-407a-9cf0-4d6bca55d75f name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.959264266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c28323a-8aad-452b-ab6e-76dd7b4d4cd1 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.959349155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c28323a-8aad-452b-ab6e-76dd7b4d4cd1 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.960277696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64042d16-01e2-4239-a976-1d4152ee0bf2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.960676199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207704960650695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64042d16-01e2-4239-a976-1d4152ee0bf2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.961254625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0af3aa1d-bf24-4cba-bdea-d25476558f75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.961337388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0af3aa1d-bf24-4cba-bdea-d25476558f75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.961387902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0af3aa1d-bf24-4cba-bdea-d25476558f75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.991513406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffde0f9e-67ec-4ffd-980f-0fa2cfcb23b3 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.991585878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffde0f9e-67ec-4ffd-980f-0fa2cfcb23b3 name=/runtime.v1.RuntimeService/Version
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.992929228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=821a12b3-60fe-4b0d-a30a-cee3958fe783 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.993382857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207704993358452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=821a12b3-60fe-4b0d-a30a-cee3958fe783 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.994038463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87497914-a88d-4b8b-90a1-6b3f0c3cb4cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.994126627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87497914-a88d-4b8b-90a1-6b3f0c3cb4cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 19:55:04 old-k8s-version-510301 crio[623]: time="2024-09-24 19:55:04.994160768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=87497914-a88d-4b8b-90a1-6b3f0c3cb4cf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048604] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037476] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.005649] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876766] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.596648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.634241] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.054570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058966] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.197243] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.130135] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.272038] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[Sep24 19:47] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778061] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[ +15.063261] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 19:51] systemd-fstab-generator[5117]: Ignoring "noauto" option for root device
	[Sep24 19:53] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.064427] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:55:05 up 8 min,  0 users,  load average: 0.18, 0.18, 0.10
	Linux old-k8s-version-510301 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000adcb60, 0xc0001020c0)
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: goroutine 153 [syscall]:
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: syscall.Syscall6(0xe8, 0xd, 0xc000cd9b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000cd9b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000b72d80, 0x0, 0x0, 0x0)
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000515e0)
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Sep 24 19:55:02 old-k8s-version-510301 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 19:55:02 old-k8s-version-510301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 24 19:55:02 old-k8s-version-510301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 24 19:55:02 old-k8s-version-510301 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 24 19:55:02 old-k8s-version-510301 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5644]: I0924 19:55:02.993135    5644 server.go:416] Version: v1.20.0
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5644]: I0924 19:55:02.993476    5644 server.go:837] Client rotation is on, will bootstrap in background
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5644]: I0924 19:55:02.995520    5644 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5644]: I0924 19:55:02.996463    5644 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 24 19:55:02 old-k8s-version-510301 kubelet[5644]: W0924 19:55:02.996485    5644 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (235.839577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510301" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (740.12s)
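A note on the failure mode above: both kubeadm init attempts time out in wait-control-plane, every CRI query for control-plane containers comes back empty, and the kubelet journal ends with "Cannot detect current cgroup on cgroup v2" while systemd reports the kubelet restart counter at 20. Together with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, this may point at a kubelet/CRI-O cgroup-driver mismatch on the v1.20.0 node, though the log alone does not prove that. A minimal triage sketch along the lines the log already recommends, assuming shell access to the node (for example 'minikube ssh -p old-k8s-version-510301') and that crictl and systemctl are present there; the command names come from the log output, the grep patterns are illustrative:

	# which cgroup driver did kubeadm configure the kubelet with? (config path taken from the kubeadm output above)
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# which cgroup manager does CRI-O expect?
	sudo crio config | grep -i cgroup_manager
	# why does the kubelet keep exiting?
	sudo journalctl -xeu kubelet | tail -n 50
	# did the runtime manage to start any control-plane containers at all?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the two drivers disagree, retrying the start with the flag the report itself suggests (--extra-config=kubelet.cgroup-driver=systemd) would be the next step.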

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0924 19:51:08.228644   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:51:12.862230   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:51:44.662967   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965745 -n no-preload-965745
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:00:06.237218078 +0000 UTC m=+6015.339229782
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-965745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-965745 logs -n 25: (2.036842433s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
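configureAuth above copies the host CA material and mints a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube and the machine name. Below is a minimal, self-signed sketch of that certificate generation; the parameters are taken from the log, but the real code signs with the minikube CA rather than self-signing, so treat this as illustration only:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Sketch: issue a self-signed server certificate whose SANs match the
        // values logged above (localhost, minikube, the node name and IPs).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-965745"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-965745"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.134")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
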
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
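The guest clock check above runs "date +%s.%N" in the VM and compares the result with the host clock; here the measured delta was about 89ms. A small sketch of that comparison follows, assuming a 2s tolerance (the actual tolerance value is not shown in this log):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // Sketch of the guest-clock check logged above: parse the "date +%s.%N"
    // output from the guest and compare it with the host clock.
    func main() {
        const guestOut = "1727207176.987992125" // value from the log
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Assumed tolerance: a larger skew would trigger a clock sync in the guest.
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }
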
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
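Before switching the node to CRI-O, the cri-docker and docker units are stopped, disabled and masked, as logged above. A hedged sketch of that shutdown sequence (the run helper is made up for illustration; minikube drives these commands through its ssh_runner instead):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a privileged command and only reports failures, since a
    // unit that is already stopped or masked is not an error for this flow.
    func run(args ...string) {
        if err := exec.Command("sudo", args...).Run(); err != nil {
            fmt.Printf("%v: %v (may already be inactive)\n", args, err)
        }
    }

    func main() {
        for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            run("systemctl", "stop", "-f", unit)
        }
        run("systemctl", "disable", "cri-docker.socket")
        run("systemctl", "disable", "docker.socket")
        run("systemctl", "mask", "cri-docker.service")
        run("systemctl", "mask", "docker.service")
        run("systemctl", "daemon-reload")
    }
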
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
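The netfilter probe above tolerates a missing bridge-nf-call-iptables sysctl: it falls back to loading br_netfilter and then force-enables IPv4 forwarding. A minimal sketch of that fallback, assuming it runs inside the guest rather than on the host:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge sysctl cannot be read, the bridge module is not loaded yet.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("sysctl missing, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe failed:", err)
                os.Exit(1)
            }
        }
        // IPv4 forwarding must be on for pod traffic to leave the node.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
            os.Exit(1)
        }
    }
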
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
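The retry.go lines above poll for the domain's DHCP lease with a growing, jittered delay until an IP address appears. A sketch of that wait loop (waitForIP is a hypothetical helper; the real backoff constants are minikube's own):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with a jittered, growing delay until it returns
    // an IP or the time budget runs out, mirroring the "will retry after ..."
    // lines in the log.
    func waitForIP(lookup func() (string, error), budget time.Duration) (string, error) {
        deadline := time.Now().Add(budget)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay += delay / 2 // grow the base delay each round
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errors.New("no lease yet") // simulated misses
            }
            return "192.168.50.116", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
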
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
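The cache_images sequence above follows one pattern per image: inspect the runtime for the expected image, remove the stale tag with crictl when it is missing or at the wrong hash, then podman-load the tarball kept under /var/lib/minikube/images, skipping the host-to-guest copy when the tarball already exists. A condensed sketch of that loop (ensureImage is hypothetical and only checks presence, not the digest the real code compares):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // ensureImage reloads an image from the local cache tarball when it is
    // not already present in the container runtime, mirroring the flow above.
    func ensureImage(image, cacheDir string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) != "" {
            return nil // already present in the container runtime
        }
        fmt.Printf("%q needs transfer: not present in container runtime\n", image)
        _ = exec.Command("sudo", "crictl", "rmi", image).Run() // best effort, tag may not exist

        // Tarballs are named like kube-apiserver_v1.31.1 in the log.
        tar := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))
        if _, err := os.Stat(tar); err != nil {
            return fmt.Errorf("no cached tarball for %s: %w", image, err)
        }
        return exec.Command("sudo", "podman", "load", "-i", tar).Run()
    }

    func main() {
        for _, img := range []string{"registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/etcd:3.5.15-0"} {
            if err := ensureImage(img, "/var/lib/minikube/images"); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
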
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
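WaitForSSH above shells out to the system ssh client and simply runs "exit 0" with the machine's private key until the command succeeds, which means sshd inside the guest is reachable. A sketch of that probe using the same ssh options seen in the log (the retry cadence here is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Probe SSH reachability of the guest by running "exit 0" until it succeeds.
    func main() {
        args := []string{
            "-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null", "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa",
            "docker@192.168.50.116", "exit 0",
        }
        for i := 0; i < 30; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
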
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
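For context, the heredoc that just ran ensures /etc/hosts on the guest maps 127.0.1.1 to the machine's hostname, rewriting an existing 127.0.1.1 entry if one is present and appending one otherwise. A minimal Go sketch of the same idea (a hypothetical helper, not minikube's actual code; assumes it runs as root on the guest):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: if no line in /etc/hosts already
// ends with the hostname, either rewrite an existing 127.0.1.1 entry or
// append a new one. Error handling kept minimal for brevity.
func ensureHostsEntry(hostname string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	content := string(data)
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
		return nil // hostname already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, entry)
	} else {
		content += entry + "\n"
	}
	return os.WriteFile("/etc/hosts", []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("default-k8s-diff-port-093771"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}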
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
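The three fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the machine when the skew stays within tolerance. A rough Go sketch of that comparison (hypothetical helper; the 2s tolerance is an assumption for illustration, not minikube's setting):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockSkewWithinTolerance parses the guest's `date +%s.%N` output and reports
// whether the absolute difference from the host clock is under the tolerance.
func clockSkewWithinTolerance(guestEpoch string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Guest epoch taken from the log line above.
	delta, ok, err := clockSkewWithinTolerance("1727207195.798332273", 2*time.Second)
	fmt.Println(delta, ok, err)
}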
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
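Note the fallback in the lines above: the sysctl probe exits with status 255 because the br_netfilter module is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is switched on before cri-o is restarted. A small Go sketch of that check-then-load pattern (a hypothetical helper, not minikube's code; needs root on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if the bridge-nf-call-iptables
// sysctl is missing, then enables IPv4 forwarding, mirroring the log above.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}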
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
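Each certificate handled above follows the same pattern: stage the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it so the system trust store picks it up. A hedged Go sketch of that pattern (it shells out to openssl because the subject-hash algorithm is OpenSSL-specific; hypothetical helper, assumes openssl on PATH and root privileges):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkTrustedCert runs `openssl x509 -hash -noout -in <cert>` and creates the
// /etc/ssl/certs/<hash>.0 symlink seen in the log for each CA bundle.
func linkTrustedCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkTrustedCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}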
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
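The `-checkend 86400` runs above exit zero only when a certificate is still valid 24 hours from now, which is how the existing control-plane certs are vetted before being reused. The equivalent check in native Go, as a sketch (the path and window come from the log; this is not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}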
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
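From here the restart blocks until the apiserver answers /healthz over HTTPS. A minimal polling loop in Go (a sketch only; the interval, timeout, and the InsecureSkipVerify shortcut are assumptions made to keep the example short, not minikube's actual settings):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns "ok"
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.134:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}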
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
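
The wait loop above follows a simple pattern: poll /healthz, tolerate the early 403 (anonymous user, RBAC bootstrap roles not yet installed) and 500 (bootstrap post-start hooks still failing) responses, and stop once the endpoint returns 200. A minimal standalone sketch of such a wait loop in Go follows; the endpoint, timeout, and retry interval are illustrative values, not minikube's own api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. TLS verification is skipped because the probe runs
// before the cluster CA is trusted on the probing host.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Early 403/500 responses are expected while bootstrap hooks finish.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.134:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
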
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
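
For context on the bridge CNI step: the configuration amounts to dropping one conflist into /etc/cni/net.d (the 496-byte /etc/cni/net.d/1-k8s.conflist copied a few lines further down). The sketch below writes such a file from Go; the embedded JSON is a typical bridge plus host-local IPAM layout with the portmap chained plugin and is an assumption for illustration, not the exact bytes minikube generates.

package main

import (
	"log"
	"os"
)

// bridgeConflist is an assumed example of a bridge CNI conflist; the real
// content written by minikube may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Requires root, like the "sudo mkdir -p /etc/cni/net.d" step in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
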
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
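
"Waiting up to 6m0s for node ... to be Ready" reduces to polling the node object until its NodeReady condition is True. A rough client-go sketch of that check follows; the kubeconfig path is hypothetical and this is only an illustration of the condition being waited on, not minikube's node_ready code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls a node's status conditions until NodeReady is True
// or the timeout elapses.
func waitForNodeReady(clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path; the test run uses its own profile kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(clientset, "no-preload-965745", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
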
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
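The sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL discovers trust anchors. A minimal Go sketch of that hash-and-symlink step, assuming `openssl` is on PATH and using illustrative paths rather than minikube's actual helper:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA assumes certPath is already readable on the host and creates the
    // /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses to look up trust
    // anchors, mirroring the `openssl x509 -hash -noout` + `ln -fs` pair above.
    func installCA(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Drop any stale link before re-creating it.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }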
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
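The `openssl x509 -checkend 86400` calls verify that each control-plane certificate remains valid for at least 24 hours before it is reused. The same check can be done in-process with the standard library; a sketch (the certificate path is just one example from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid for at least d, equivalent to `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }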
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
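Each of the four grep/rm pairs above checks whether an existing /etc/kubernetes/*.conf still references the expected control-plane endpoint and deletes it if not, so kubeadm can regenerate it. A sketch of that check-and-remove pattern (endpoint and file list taken from the log; the real code runs these commands over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConfigs removes kubeconfig-style files that do not mention the
    // expected API endpoint, mirroring the grep-then-rm loop in the log above.
    func cleanStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or stale: remove so it can be regenerated.
    			_ = os.Remove(f)
    			fmt.Printf("removed stale config %s\n", f)
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }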
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
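Rather than a full `kubeadm init`, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A local sketch of driving that phase sequence (the real flow wraps each command in sudo over SSH with an adjusted PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runInitPhases replays the kubeadm init phases seen in the log, stopping at
    // the first failure. The config path matches the one used above.
    func runInitPhases(config string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("kubeadm %v: %w", args, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }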
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
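The addon installation step copies the manifests into /etc/kubernetes/addons and applies them in a single `kubectl apply` with KUBECONFIG pointing at the in-VM kubeconfig. A local sketch of that apply step (paths copied from the log; in the report this command runs inside the node over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddonManifests runs `kubectl apply -f` over the addon manifests the
    // log shows being installed, with KUBECONFIG set for the in-VM kubeconfig.
    func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := applyAddonManifests(
    		"/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/metrics-apiservice.yaml",
    			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    			"/etc/kubernetes/addons/metrics-server-service.yaml",
    		},
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }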
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
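The wait loop above polls /healthz and treats 403 (anonymous access rejected while RBAC bootstrap roles are still being created) and 500 (post-start hooks such as rbac/bootstrap-roles not yet finished) as "not ready", returning only once it sees 200. A sketch of such a polling loop; TLS verification is skipped here only because, as in the log, the probe hits the apiserver by IP before client credentials are configured:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 OK or the deadline passes. Non-200 responses (403, 500 during
    // bootstrap) are treated as "keep waiting", as in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.116:8444/healthz", 2*time.Minute))
    }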
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
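A bridge CNI conflist is then written to /etc/cni/net.d/1-k8s.conflist. The report does not show the file's contents; the following is only a hypothetical minimal bridge conflist in the standard CNI format, written from Go to the same path, and its field values are illustrative rather than minikube's generated configuration:

    package main

    import "os"

    // A hypothetical minimal bridge CNI conflist, written to the path the log
    // shows minikube copying to. Field values here are illustrative only; the
    // real file generated by minikube is not shown in this report.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }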
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
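The pod_ready waits above repeatedly fetch each system-critical pod and check its Ready condition, skipping pods hosted on a node that is not yet Ready. A client-go sketch of the core readiness check (kubeconfig path and pod name are placeholders taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has the Ready condition set to True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll until Ready or give up after 4 minutes, like the waits in the log.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ok, err := podReady(cs, "kube-system", "etcd-default-k8s-diff-port-093771"); err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }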
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
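The SSH script above makes /etc/hosts consistent with the new hostname: it rewrites an existing 127.0.1.1 entry or appends one if the name is missing. The same idempotent edit, sketched locally in Go (this mirrors the shell logic; it is not the minikube implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry rewrites an existing 127.0.1.1 line to point at hostname,
    // or appends one if no such line exists, matching the shell snippet above.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), hostname) {
    		return nil // already present
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	var out string
    	if re.MatchString(string(data)) {
    		out = re.ReplaceAllString(string(data), entry)
    	} else {
    		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
    	}
    	return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "old-k8s-version-510301"))
    }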
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
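The hash-and-link steps above follow the standard OpenSSL trust-store convention: each CA bundle copied to /usr/share/ca-certificates is hashed and then linked as /etc/ssl/certs/<subject-hash>.0 so library lookups can find it. A minimal manual check on the guest (a sketch; the hash value is the one this run reported for minikubeCA.pem) would be:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 in this run
	ls -l /etc/ssl/certs/b5213941.0   # the symlink created by the ln -fs above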
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
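Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, exit status 1 means it is about to expire and would have to be regenerated. A standalone equivalent for one of the files checked in this run:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"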
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
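Because existing configuration files were found, the restart path re-runs kubeadm phase by phase against the staged v1.20.0 binary instead of doing a full init. Stripped of the sudo env PATH=/var/lib/minikube/binaries/v1.20.0 wrapper, the sequence used above is:

	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml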
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
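The delta reported here is simply the guest clock reading minus the timestamp recorded by the minikube process when fixHost finished: 1727207233.821269327 − 1727207233.741591139 ≈ 0.079678 s, i.e. the 79.678188ms shown, which the log deems within tolerance, so no clock adjustment is attempted.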
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
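The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a new file. Assuming the stock drop-in shipped with the minikube ISO, the keys touched here end up roughly as follows (a sketch, not a dump taken from this host):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]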
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
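The earlier sysctl probe failed with status 255 only because the br_netfilter module was not loaded yet, so minikube loads it and enables IPv4 forwarding explicitly. Done by hand on the guest this amounts to:

	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded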
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
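The one-liner above keeps the host.minikube.internal entry idempotent: it filters any previous mapping out of /etc/hosts, appends the current host-side address of the libvirt network, and copies the result back over the original file. After it runs, the guest's /etc/hosts contains a line of the form:

	192.168.61.1	host.minikube.internal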
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
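
As an aside, the preload lines above amount to a simple pattern: stat the guest for /preloaded.tar.lz4, copy the cached preloaded-images tarball over when that check fails, extract it into /var with lz4 (preserving the security.capability xattr), then re-run crictl to confirm the images are present. A minimal Go sketch of that check-then-extract sequence follows; it runs the same commands locally, the path is copied from the log, and it is only an illustration, not the ssh_runner code itself.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4" // path taken from the log above

        // Existence check, like `stat -c "%s %y" /preloaded.tar.lz4` in the log.
        if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
            fmt.Println("tarball missing, it would be copied over first:", err)
            return
        }

        // Extract into /var, preserving security.capability xattrs, as in the log.
        out, err := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s\n", err, out)
            return
        }
        fmt.Println("preload extracted; `crictl images` should now list the k8s images")
    }
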
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
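
The /etc/hosts rewrite just above strips any stale control-plane.minikube.internal line, appends the current entry, writes the result to a temp file and then copies it back with sudo. The Go sketch below mirrors that rewrite locally; the IP and hostname are copied from the log, everything else (the final sudo cp) is left to the reader, so treat it as an illustration only.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Rebuild /etc/hosts content: drop any stale control-plane entry,
    // then append the fresh one, as the bash one-liner in the log does.
    func main() {
        const entry = "192.168.61.21\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read /etc/hosts:", err)
            return
        }

        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // stale entry, equivalent to the grep -v above
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)

        tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid()) // mirrors /tmp/h.$$ in the log
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Println("write temp hosts file:", err)
            return
        }
        fmt.Println("wrote", tmp, "- finish with: sudo cp", tmp, "/etc/hosts")
    }
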
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
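
The certificate lines above repeat one trust-installation pattern per CA file (10949.pem, 109492.pem, minikubeCA.pem): copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients on the guest trust it. A small Go sketch of that hash-and-link step follows; the path is a stand-in taken from the log, and the final ln is printed rather than executed.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Compute the OpenSSL subject hash for a CA certificate and print the
    // symlink that would make it trusted system-wide.
    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("hashing failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log

        // The log then runs (as root): ln -fs <cert> /etc/ssl/certs/<hash>.0
        fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
    }
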
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
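
The long runs of `sudo pgrep -xnf kube-apiserver.*minikube.*` above are a simple half-second poll for the apiserver process after kubelet is (re)started; it ends once pgrep returns a pid, which is what the "took 2.019101558s to wait for apiserver process to appear" line records. A minimal Go sketch of that wait loop follows; the timeout is an assumption of the sketch, not a value from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll pgrep every 500ms until a kube-apiserver process started by
    // minikube shows up, or give up after a fixed deadline.
    func main() {
        deadline := time.Now().Add(2 * time.Minute) // sketch-only timeout
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver process appeared, pid(s): %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
        }
        fmt.Println("timed out waiting for the apiserver process")
    }
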
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
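
The healthz wait that just finished is another poll: hit https://192.168.61.21:8443/healthz and treat "connection refused", 403 (anonymous access not yet authorized while RBAC bootstrap roles are still being created) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready", retrying until the endpoint returns 200 "ok". A short Go sketch of that loop follows; TLS verification is skipped purely for the sketch because the apiserver's CA is not in the local trust store, and the retry count is an assumption.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Keep polling /healthz until it returns 200 "ok"; 403 and 500 just
    // mean the control plane is still bootstrapping.
    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.61.21:8443/healthz" // endpoint from the log

        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not up yet:", err) // e.g. connection refused
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for a healthy apiserver")
    }
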
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
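
The pod_ready lines above walk the system-critical pods one by one and wait for each to report the Ready condition; while the node itself is still NotReady the per-pod wait is skipped with the "hosting pod ... is currently not Ready" error, kube-proxy is the first to go Ready, and the metrics-server wait that starts here is the one that never completes in this failure. A small Go sketch of that Ready poll follows; it shells out to kubectl with a jsonpath query rather than using the test's own client, and the pod name is just an example taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // Poll a kube-system pod's Ready condition via kubectl until it
    // reports "True", or time out.
    func main() {
        pod := "kube-scheduler-embed-certs-311319" // example pod from the log

        for i := 0; i < 120; i++ {
            out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println(pod, "is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for", pod, "to become Ready")
    }
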
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
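	The block above is one iteration of minikube's control-plane probe: it looks for a kube-apiserver process, asks the CRI runtime for each expected component by name, and, finding none, falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A minimal shell sketch of the same probe, using only the commands that appear in the log (assumes it is run on the minikube VM, e.g. via `minikube ssh`, with crictl installed):
	
	    # probe each expected control-plane container by name, as the log does
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done
	    # fallback log sources gathered when nothing is found, copied from the log:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400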
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
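	Every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused", i.e. nothing is serving the apiserver port on the node. A sketch of how that could be confirmed by hand (assumes ss and curl are available on the VM; the kubectl and kubeconfig paths are copied from the log):
	
	    # is anything listening on the apiserver port?
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	    # the exact command minikube runs, for a manual retry once the apiserver is up:
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig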
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
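	The interleaved pod_ready.go:103 lines come from the other test clusters polling their metrics-server pods, which never report Ready. An equivalent manual check (a sketch only; the pod name is copied from the log, the profile/context name is a placeholder to substitute):
	
	    # <profile-name> is hypothetical; use the failing cluster's kubectl context
	    kubectl --context <profile-name> -n kube-system \
	        get pod metrics-server-6867b74b74-jfrhm \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'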
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
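The block above is one pass of minikube's log-collection loop on the old-k8s-version node: for each expected control-plane component it runs "sudo crictl ps -a --quiet --name=<component>" over SSH, and every probe here returns an empty ID list, so only kubelet, dmesg, CRI-O and container-status output can be gathered. Below is a minimal sketch of the same probe, assuming it is run on the node itself (for example via "minikube ssh"); the component list and the crictl invocation mirror the commands quoted in the log and nothing else is taken from the harness.

    # Illustrative only; not part of the test harness.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done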
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
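The interleaved pod_ready lines come from three parallel StartStop test processes (PIDs 69576, 69904 and 69408), each polling a metrics-server pod whose Ready condition stays False throughout this window. A minimal way to inspect the same condition by hand, assuming kubectl access to the cluster under test; the pod name is copied from the log above and will differ per run.

    # Illustrative check of the Ready condition the harness is polling.
    kubectl -n kube-system get pod metrics-server-6867b74b74-w7bfj \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'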
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
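Each cycle also attempts "describe nodes" through the node-local kubeconfig and fails with a connection-refused error on localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container. The following is a hedged sketch of a manual diagnosis, not something the harness itself runs: confirm that nothing is listening on the apiserver port before retrying a kubectl call against that kubeconfig. The kubectl binary and kubeconfig paths are the ones quoted in the log; the ss check is an assumed extra step.

    # Illustrative only; run on the minikube node.
    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes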
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
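	Note: the block above repeats one fixed diagnostic cycle: probe for a kube-apiserver process, ask the CRI runtime (via crictl) for containers of each control-plane component, and, when none are found, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal Go sketch of that cycle, using the same commands the log records (this is not minikube's actual implementation, just an editorial shell-out illustration of the probe sequence):

	// probe.go: sketch of the diagnostic cycle seen in the log above.
	// Assumes it runs on the minikube node with sudo, crictl, and journalctl available.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes a command through bash, mirroring how the log's ssh_runner invokes them.
	func run(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// 1. Is any kube-apiserver process running at all?
		if _, err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err != nil {
			fmt.Println("no kube-apiserver process found")
		}

		// 2. Ask the CRI runtime for containers of each control-plane component.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			ids, _ := run("sudo crictl ps -a --quiet --name=" + name)
			if ids == "" {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}

		// 3. Fall back to node-level logs, as the harness does when no containers exist.
		for _, cmd := range []string{
			`sudo journalctl -u kubelet -n 400`,
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			`sudo journalctl -u crio -n 400`,
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		} {
			out, _ := run(cmd)
			fmt.Println(out)
		}
	}
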
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
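The repeated "listing CRI containers" / "found id" entries above come from the run querying crictl once per control-plane component and recording the container IDs whose logs it will collect next. A minimal local sketch of that query, assuming crictl is on PATH and run directly rather than over the SSH runner used in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainersByName mirrors the "listing CRI containers" step in the log:
    // ask crictl for all containers (running or exited) whose name matches the
    // filter and return their IDs, one per output line. The real run wraps this
    // in sudo and executes it inside the VM over SSH.
    func listContainersByName(name string) ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainersByName(component)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
    	}
    }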
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
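The "Gathering logs for ..." steps above run one shell pipeline per source (journalctl for kubelet and CRI-O, crictl logs for individual containers, dmesg, kubectl describe nodes) and simply record a failure when a source is unavailable; the refused connection to localhost:8443 is expected here because no kube-apiserver container was found. A rough sketch of that collect-and-tolerate-failure pattern, assuming local execution instead of the SSH runner shown in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLog runs one log-collection command through bash, the way the
    // "Gathering logs for ..." steps do, and prints its combined output.
    // A failing source (e.g. kubectl while the apiserver is down) is reported
    // but does not abort the other collectors.
    func gatherLog(name, command string) {
    	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
    	if err != nil {
    		fmt.Printf("failed to gather %s logs: %v\n%s\n", name, err, out)
    		return
    	}
    	fmt.Printf("=== %s ===\n%s\n", name, out)
    }

    func main() {
    	// Illustrative subset of the collectors seen in the log.
    	gatherLog("kubelet", "journalctl -u kubelet -n 400")
    	gatherLog("CRI-O", "journalctl -u crio -n 400")
    	gatherLog("dmesg", "dmesg --level warn,err,crit,alert,emerg | tail -n 400")
    	gatherLog("describe nodes", "kubectl describe nodes")
    }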
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
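The healthz wait above polls the apiserver's /healthz endpoint until it answers 200 "ok", then reads the control-plane version. A minimal sketch of such a poll, using the endpoint from the log and skipping TLS verification only because the apiserver serves a self-signed certificate and this is a local health probe:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver healthz URL until it returns HTTP 200
    // or the deadline passes, approximating the api_server.go wait in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Self-signed cluster cert; skipping verification is acceptable
    		// only for a local health probe like this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body)
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver healthz not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.134:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }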
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
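The metrics-server wait that times out above is a poll of the pod's Ready condition; after 4m0s without Ready=True the wait gives up and the run falls back to a full `kubeadm reset`. A sketch of that readiness check with client-go, taking the kubeconfig path and pod name from the log purely for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, the same
    // condition the pod_ready.go wait in the log keeps re-checking.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path as seen in the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-6867b74b74-rgcll", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }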
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
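Before the "Done!" line above, the run lists kube-system pods, confirms the default service account exists, checks that the kubelet service is active, and reads the node's reported capacity (ephemeral storage and CPU) for the NodePressure verification. A sketch of that last capacity read with client-go, again assuming the kubeconfig path only for illustration:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The NodePressure verification in the log reads these two capacities
    	// (17734596Ki of ephemeral storage and 2 CPUs in the run above).
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }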
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
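The config check above finds none of the expected /etc/kubernetes/*.conf files, so the stale-config cleanup greps each file for the control-plane endpoint and removes any file that does not reference it; here the files are simply absent, hence the exit status 2 from ls and grep. A local sketch of that cleanup logic, assuming direct file access instead of the SSH commands shown:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfig removes a kubeconfig-style file unless it already
    // points at the expected control-plane endpoint, approximating the
    // grep-then-rm sequence in the log. A missing file is treated the same
    // as one without the endpoint: there is nothing worth keeping.
    func cleanStaleKubeconfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // already references the right endpoint, keep it
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := cleanStaleKubeconfig(f, endpoint); err != nil {
    			fmt.Printf("cleanup of %s failed: %v\n", f, err)
    		}
    	}
    }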
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
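The 496-byte file scp'd above is minikube's bridge CNI config for this node. For orientation, a generic bridge CNI conflist of this shape is sketched below; the plugin types (bridge, portmap, host-local) are standard CNI plugins, but the network name, subnet and exact field values here are illustrative assumptions, not the literal contents minikube wrote.

	# Hypothetical sketch of a minimal bridge CNI conflist (values are illustrative)
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF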
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
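At this point the metrics-server manifests have only been applied; the pod itself still shows as Pending further down in this log. A quick manual check of whether the addon eventually becomes healthy uses standard kubectl commands against this profile's kubeconfig (the k8s-app=metrics-server label is the one the stock addon manifests use; shown purely as an illustration):

	# Verify the metrics-server addon once its pod is Ready
	kubectl -n kube-system get deploy metrics-server
	kubectl -n kube-system get pods -l k8s-app=metrics-server
	kubectl top nodes   # only works once the metrics API is actually serving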
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
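With the default-k8s-diff-port-093771 start finished, the kubeconfig at /home/jenkins/minikube-integration/19700-3751/kubeconfig already points at this cluster. A minimal sanity check from the test host would be (standard kubectl commands; the context name is assumed to match the profile name, as minikube normally sets it):

	# Confirm kubectl talks to the freshly started cluster
	kubectl config current-context   # expected: default-k8s-diff-port-093771
	kubectl get nodes -o wide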
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
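The four grep/rm pairs above all follow the same stale-kubeconfig cleanup pattern: check each file under /etc/kubernetes for the expected control-plane endpoint and delete it when the check fails (here every grep exits with status 2 because the files no longer exist after the kubeadm reset). Condensed into one loop, using the same endpoint string shown in the log:

	# Sketch of the stale-kubeconfig cleanup the log performs file by file
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done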
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
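Process 70152 keeps failing kubeadm's kubelet health check while the other clusters come up. When chasing this by hand, the same probe kubeadm runs plus the kubelet unit state and logs are the usual starting points (generic commands on the affected node, not taken from this run):

	# Reproduce kubeadm's kubelet probe and inspect the kubelet service
	curl -sSL http://localhost:10248/healthz
	sudo systemctl status kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 50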
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
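	(For reference, the 496-byte conflist copied into /etc/cni/net.d above is not reproduced in the log; a minimal bridge CNI conflist of the kind minikube installs generally looks like the sketch below. The file name and pod subnet here are assumptions, shown only for orientation.)
	    cat > /tmp/1-k8s.conflist.example <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF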
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
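	(The repeated "kubectl get sa default" runs above are polling until the default ServiceAccount exists and the minikube-rbac ClusterRoleBinding takes effect. A quick hand-check from the host, not part of the captured run and shown only as a sketch, would be:)
	    kubectl --context embed-certs-311319 -n default get serviceaccount default
	    kubectl --context embed-certs-311319 get clusterrolebinding minikube-rbac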
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
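	(Once those metrics-server manifests are applied, one way to confirm the addon is actually serving metrics is sketched below; these checks are not part of the captured run and assume the kubeconfig context written earlier for this profile:)
	    kubectl --context embed-certs-311319 -n kube-system rollout status deploy/metrics-server
	    kubectl --context embed-certs-311319 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-311319 top nodes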
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
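	(At this point the embed-certs-311319 profile is usable from the host via the kubeconfig minikube just wrote. A quick manual verification, not part of the test itself and shown only as a sketch, would be:)
	    kubectl --context embed-certs-311319 get nodes -o wide
	    kubectl --context embed-certs-311319 get pods -A
	    minikube profile list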
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
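	(The block above is the checklist kubeadm itself prints when the kubelet never becomes healthy. Collected in one place as a sketch, to be run on the affected node, e.g. via "minikube ssh" with the failing old-k8s-version profile:)
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo systemctl enable kubelet.service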
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
	
	
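	The failure above ends with minikube's own pointer toward a kubelet/cgroup-driver problem (K8S_KUBELET_NOT_RUNNING, issue 4172). As a minimal sketch only, the follow-up steps the log itself recommends look like the commands below; the exact flags are copied from the messages above and assume shell access to the node (e.g. via `minikube ssh`), they have not been re-run against this report.
	
	    # inspect the kubelet, as suggested by the kubeadm output
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # list control-plane containers via CRI-O, as suggested by the kubeadm output
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # workaround suggested by minikube for issue 4172
	    minikube start --extra-config=kubelet.cgroup-driver=systemd
	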
	==> CRI-O <==
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.794283791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208007794260457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db4c17b3-f7de-43b0-9737-e626456712b4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.794848796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b71c009-6984-4560-af93-905c4b82b7b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.794910463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b71c009-6984-4560-af93-905c4b82b7b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.795333594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b71c009-6984-4560-af93-905c4b82b7b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.831004868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=199a8dc9-66d1-4a56-82ed-563d208d980e name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.831078467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=199a8dc9-66d1-4a56-82ed-563d208d980e name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.832124140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91b6bffe-eccc-4360-889e-b6cacad3f954 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.832544387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208007832509768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91b6bffe-eccc-4360-889e-b6cacad3f954 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.832950494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=709a18a2-169f-40c8-a5e3-98f5a72b4e1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.833005880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=709a18a2-169f-40c8-a5e3-98f5a72b4e1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.833187288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=709a18a2-169f-40c8-a5e3-98f5a72b4e1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.867537592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba4e6bcd-27c7-487c-8951-6a8bc73db24b name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.867610315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba4e6bcd-27c7-487c-8951-6a8bc73db24b name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.868681472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef555bc4-2988-4530-bfc4-a6c2636734d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.869006779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208007868986134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef555bc4-2988-4530-bfc4-a6c2636734d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.869495793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c640f54-3f74-4d82-8f3c-48bee1e7e12a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.869558059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c640f54-3f74-4d82-8f3c-48bee1e7e12a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.869748552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c640f54-3f74-4d82-8f3c-48bee1e7e12a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.900329849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a243c16-f32e-490e-ac80-dd33600318d2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.900455194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a243c16-f32e-490e-ac80-dd33600318d2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.901576471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37b73b1a-bae4-453f-9a09-bacedb58f295 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.901902407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208007901875773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37b73b1a-bae4-453f-9a09-bacedb58f295 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.902346936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fbd2287-c9b7-41ea-a5b2-2ad656fdaf7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.902451038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fbd2287-c9b7-41ea-a5b2-2ad656fdaf7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:07 no-preload-965745 crio[700]: time="2024-09-24 20:00:07.902628952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fbd2287-c9b7-41ea-a5b2-2ad656fdaf7e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	50a3e972e70a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   376a3d2bc97fc       storage-provisioner
	f51b423ff2358       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   290e5f0c006da       busybox
	5701cbef602b0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   5ea281ea1825d       coredns-7c65d6cfc9-qb2mm
	daabc8f3d80f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   376a3d2bc97fc       storage-provisioner
	35d91507f646a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   2f178420dd067       kube-proxy-ng8vf
	68e60ea512c88       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   587d6ccb1745b       kube-scheduler-no-preload-965745
	b09b340cd637a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   9944102a7ac49       etcd-no-preload-965745
	b6f32e0b22cfb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   b7ea7f8497be3       kube-controller-manager-no-preload-965745
	8c6b0840dab2d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   885bb9e653bf5       kube-apiserver-no-preload-965745
	
	
	==> coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39134 - 32408 "HINFO IN 5780760760276393963.1388614174394367891. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.101909183s
	
	
	==> describe nodes <==
	Name:               no-preload-965745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-965745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=no-preload-965745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_38_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:38:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-965745
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:57:21 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:57:21 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:57:21 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:57:21 +0000   Tue, 24 Sep 2024 19:46:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    no-preload-965745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0a7579b73f843d89d32d738c989e404
	  System UUID:                f0a7579b-73f8-43d8-9d32-d738c989e404
	  Boot ID:                    24d70444-16d0-434e-aeb5-3b94273e684f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-qb2mm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-965745                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-965745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-965745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-ng8vf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-965745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-w7bfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node no-preload-965745 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-965745 event: Registered Node no-preload-965745 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-965745 event: Registered Node no-preload-965745 in Controller
	
	
	==> dmesg <==
	[Sep24 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.046885] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035754] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.670442] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.794001] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534674] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.362037] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.054257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063605] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.166455] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.149953] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.277343] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[ +14.614382] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.059924] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.482839] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +5.073446] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.934652] systemd-fstab-generator[1972]: Ignoring "noauto" option for root device
	[  +3.276580] kauditd_printk_skb: 61 callbacks suppressed
	[Sep24 19:47] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] <==
	{"level":"warn","ts":"2024-09-24T19:46:44.097002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T19:46:43.604939Z","time spent":"492.055065ms","remote":"127.0.0.1:40668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":1160,"request content":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2024-09-24T19:46:44.096490Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.972778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-965745\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-09-24T19:46:44.097105Z","caller":"traceutil/trace.go:171","msg":"trace[1159460695] range","detail":"{range_begin:/registry/minions/no-preload-965745; range_end:; response_count:1; response_revision:590; }","duration":"174.596058ms","start":"2024-09-24T19:46:43.922503Z","end":"2024-09-24T19:46:44.097100Z","steps":["trace[1159460695] 'agreement among raft nodes before linearized reading'  (duration: 173.929963ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.385498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.931276ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6934578216309035046 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/busybox.17f845cbf923426a\" mod_revision:574 > success:<request_put:<key:\"/registry/events/default/busybox.17f845cbf923426a\" value_size:796 lease:6934578216309034623 >> failure:<request_range:<key:\"/registry/events/default/busybox.17f845cbf923426a\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-24T19:46:44.385572Z","caller":"traceutil/trace.go:171","msg":"trace[586676405] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"267.702431ms","start":"2024-09-24T19:46:44.117861Z","end":"2024-09-24T19:46:44.385563Z","steps":["trace[586676405] 'read index received'  (duration: 106.616287ms)","trace[586676405] 'applied index is now lower than readState.Index'  (duration: 161.084984ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T19:46:44.385635Z","caller":"traceutil/trace.go:171","msg":"trace[1292513908] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"283.184303ms","start":"2024-09-24T19:46:44.102445Z","end":"2024-09-24T19:46:44.385629Z","steps":["trace[1292513908] 'process raft request'  (duration: 122.072336ms)","trace[1292513908] 'compare'  (duration: 160.8203ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T19:46:44.385751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.5317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" ","response":"range_response_count:1 size:1219"}
	{"level":"info","ts":"2024-09-24T19:46:44.386323Z","caller":"traceutil/trace.go:171","msg":"trace[627618841] range","detail":"{range_begin:/registry/clusterrolebindings/metrics-server:system:auth-delegator; range_end:; response_count:1; response_revision:591; }","duration":"264.109797ms","start":"2024-09-24T19:46:44.122203Z","end":"2024-09-24T19:46:44.386313Z","steps":["trace[627618841] 'agreement among raft nodes before linearized reading'  (duration: 263.440071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.386012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.144168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:1 size:1210"}
	{"level":"info","ts":"2024-09-24T19:46:44.386517Z","caller":"traceutil/trace.go:171","msg":"trace[1243052314] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:591; }","duration":"268.652269ms","start":"2024-09-24T19:46:44.117857Z","end":"2024-09-24T19:46:44.386510Z","steps":["trace[1243052314] 'agreement among raft nodes before linearized reading'  (duration: 268.115567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.850996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.769341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:metrics-server\" ","response":"range_response_count:1 size:1174"}
	{"level":"warn","ts":"2024-09-24T19:46:44.851014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.082881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3958"}
	{"level":"info","ts":"2024-09-24T19:46:44.851046Z","caller":"traceutil/trace.go:171","msg":"trace[298194409] range","detail":"{range_begin:/registry/clusterrolebindings/system:metrics-server; range_end:; response_count:1; response_revision:592; }","duration":"344.83205ms","start":"2024-09-24T19:46:44.506202Z","end":"2024-09-24T19:46:44.851034Z","steps":["trace[298194409] 'range keys from in-memory index tree'  (duration: 344.695051ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T19:46:44.851056Z","caller":"traceutil/trace.go:171","msg":"trace[365039085] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:592; }","duration":"273.135155ms","start":"2024-09-24T19:46:44.577910Z","end":"2024-09-24T19:46:44.851045Z","steps":["trace[365039085] 'range keys from in-memory index tree'  (duration: 272.961743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.851072Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T19:46:44.506167Z","time spent":"344.900301ms","remote":"127.0.0.1:40702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":1197,"request content":"key:\"/registry/clusterrolebindings/system:metrics-server\" "}
	{"level":"warn","ts":"2024-09-24T19:46:44.851409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.540946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ng8vf\" ","response":"range_response_count:1 size:4936"}
	{"level":"info","ts":"2024-09-24T19:46:44.851457Z","caller":"traceutil/trace.go:171","msg":"trace[343055841] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ng8vf; range_end:; response_count:1; response_revision:592; }","duration":"335.5731ms","start":"2024-09-24T19:46:44.515858Z","end":"2024-09-24T19:46:44.851431Z","steps":["trace[343055841] 'range keys from in-memory index tree'  (duration: 335.277408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.851482Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T19:46:44.515822Z","time spent":"335.653445ms","remote":"127.0.0.1:40488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4959,"request content":"key:\"/registry/pods/kube-system/kube-proxy-ng8vf\" "}
	{"level":"info","ts":"2024-09-24T19:46:45.001024Z","caller":"traceutil/trace.go:171","msg":"trace[2047536342] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"122.356278ms","start":"2024-09-24T19:46:44.878648Z","end":"2024-09-24T19:46:45.001005Z","steps":["trace[2047536342] 'read index received'  (duration: 122.139897ms)","trace[2047536342] 'applied index is now lower than readState.Index'  (duration: 215.574µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T19:46:45.001128Z","caller":"traceutil/trace.go:171","msg":"trace[1791779633] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"134.120192ms","start":"2024-09-24T19:46:44.867000Z","end":"2024-09-24T19:46:45.001120Z","steps":["trace[1791779633] 'process raft request'  (duration: 133.856743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:45.001296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.630323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/metrics-server\" ","response":"range_response_count:1 size:1654"}
	{"level":"info","ts":"2024-09-24T19:46:45.001327Z","caller":"traceutil/trace.go:171","msg":"trace[1451915071] range","detail":"{range_begin:/registry/services/specs/kube-system/metrics-server; range_end:; response_count:1; response_revision:593; }","duration":"122.67448ms","start":"2024-09-24T19:46:44.878645Z","end":"2024-09-24T19:46:45.001319Z","steps":["trace[1451915071] 'agreement among raft nodes before linearized reading'  (duration: 122.599979ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T19:56:36.851839Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-09-24T19:56:36.860453Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":874,"took":"8.37578ms","hash":2438960273,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-24T19:56:36.860499Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2438960273,"revision":874,"compact-revision":-1}
	
	
	==> kernel <==
	 20:00:08 up 14 min,  0 users,  load average: 0.03, 0.09, 0.09
	Linux no-preload-965745 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] <==
	W0924 19:56:39.608484       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:56:39.608574       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:56:39.609559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:56:39.609649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 19:57:39.609974       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:57:39.610045       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 19:57:39.610149       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:57:39.610192       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 19:57:39.611201       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:57:39.611399       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 19:59:39.611476       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 19:59:39.611607       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:59:39.611655       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 19:59:39.611728       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:59:39.612851       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:59:39.612941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] <==
	E0924 19:54:42.180413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:54:42.741732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:55:12.186766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:55:12.749099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:55:42.192506       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:55:42.756717       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:56:12.197826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:12.763652       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:56:42.203449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:42.771430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:57:12.208655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:12.777187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:57:21.922295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-965745"
	E0924 19:57:42.215335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:42.784819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:57:45.526933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.536µs"
	I0924 19:58:00.526119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.841µs"
	E0924 19:58:12.221055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:12.791596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:58:42.227076       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:42.799486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:12.233116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:12.805743       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:42.238669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:42.812409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:46:40.246129       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:46:40.255229       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.134"]
	E0924 19:46:40.255448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:46:40.296558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:46:40.296596       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:46:40.296654       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:46:40.300320       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:46:40.300798       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:46:40.300832       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:46:40.302504       1 config.go:199] "Starting service config controller"
	I0924 19:46:40.302549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:46:40.302581       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:46:40.302601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:46:40.303309       1 config.go:328] "Starting node config controller"
	I0924 19:46:40.303338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:46:40.403310       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 19:46:40.403466       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:46:40.403478       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] <==
	I0924 19:46:36.231353       1 serving.go:386] Generated self-signed cert in-memory
	W0924 19:46:38.572845       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:46:38.572954       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:46:38.572984       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:46:38.573053       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:46:38.631436       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 19:46:38.633452       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:46:38.639145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 19:46:38.641558       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:46:38.643073       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:46:38.641631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 19:46:38.743853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 19:59:01 no-preload-965745 kubelet[1354]: E0924 19:59:01.512870    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 19:59:04 no-preload-965745 kubelet[1354]: E0924 19:59:04.709973    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207944709682168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:04 no-preload-965745 kubelet[1354]: E0924 19:59:04.710042    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207944709682168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:12 no-preload-965745 kubelet[1354]: E0924 19:59:12.514949    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 19:59:14 no-preload-965745 kubelet[1354]: E0924 19:59:14.712154    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207954711707068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:14 no-preload-965745 kubelet[1354]: E0924 19:59:14.712506    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207954711707068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:24 no-preload-965745 kubelet[1354]: E0924 19:59:24.513530    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 19:59:24 no-preload-965745 kubelet[1354]: E0924 19:59:24.714017    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207964713732497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:24 no-preload-965745 kubelet[1354]: E0924 19:59:24.714057    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207964713732497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]: E0924 19:59:34.533619    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]: E0924 19:59:34.715880    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207974715534047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:34 no-preload-965745 kubelet[1354]: E0924 19:59:34.715906    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207974715534047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:39 no-preload-965745 kubelet[1354]: E0924 19:59:39.513162    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 19:59:44 no-preload-965745 kubelet[1354]: E0924 19:59:44.718043    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207984717680043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:44 no-preload-965745 kubelet[1354]: E0924 19:59:44.718084    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207984717680043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:50 no-preload-965745 kubelet[1354]: E0924 19:59:50.512766    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 19:59:54 no-preload-965745 kubelet[1354]: E0924 19:59:54.719455    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207994719153068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:54 no-preload-965745 kubelet[1354]: E0924 19:59:54.719498    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207994719153068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:03 no-preload-965745 kubelet[1354]: E0924 20:00:03.513225    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:00:04 no-preload-965745 kubelet[1354]: E0924 20:00:04.721145    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208004720625712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:04 no-preload-965745 kubelet[1354]: E0924 20:00:04.721476    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208004720625712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] <==
	I0924 19:47:10.773086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:47:10.783341       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:47:10.783486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:47:28.184285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:47:28.184581       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f!
	I0924 19:47:28.184908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8166aaa3-4b4c-449e-a89c-dbccda9e331c", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f became leader
	I0924 19:47:28.287779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f!
	
	
	==> storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] <==
	I0924 19:46:40.200996       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0924 19:47:10.203840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965745 -n no-preload-965745
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-965745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w7bfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj: exit status 1 (65.871506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w7bfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0924 19:51:57.223178   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:00:53.496244773 +0000 UTC m=+6062.598256480
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-093771 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-093771 logs -n 25: (2.052491319s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
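configureAuth above regenerates the docker-machine style server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.134, localhost, minikube, no-preload-965745) and copies it to /etc/docker on the guest. The sketch below produces a comparable certificate with Go's crypto/x509; it is self-signed for brevity (minikube signs with its CA key instead), and the key size and output file names are assumptions.

// server_cert_sketch.go - illustrative server certificate with the SANs from the log.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
    if err != nil {
        log.Fatal(err)
    }

    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-965745"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"localhost", "minikube", "no-preload-965745"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.134")},
    }

    // Self-signed here for simplicity; the real flow signs with the minikube CA.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        log.Fatal(err)
    }

    certOut, _ := os.Create("server.pem")
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyOut, _ := os.Create("server-key.pem")
    pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    keyOut.Close()
}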
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
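The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and accepts the ~89ms delta as within tolerance. A rough Go sketch of that comparison follows; the float parse and the tolerance value are assumptions, not minikube's fix.go logic.

// clock_delta_sketch.go - compare a `date +%s.%N` reading against local time.
package main

import (
    "fmt"
    "strconv"
    "time"
)

func main() {
    guestOut := "1727207176.987992125" // example reading from the log
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        panic(err)
    }
    // float64 loses sub-microsecond precision, which is fine for a millisecond-level check.
    guest := time.Unix(0, int64(secs*float64(time.Second)))

    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    const tolerance = 2 * time.Second // assumed tolerance, not the real threshold
    if delta <= tolerance {
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
    }
}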
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
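The runtime preparation above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", the unprivileged-port sysctl) and then restarts crio. The Go sketch below applies the two central line rewrites to a local copy of the file with regexp; the local path is an assumption and it deliberately does not touch the real config.

// crio_conf_sketch.go - mimic the sed edits from the log on a local copy of 02-crio.conf.
package main

import (
    "log"
    "os"
    "regexp"
)

func main() {
    path := "02-crio.conf" // assumed local copy, not /etc/crio/crio.conf.d/02-crio.conf
    data, err := os.ReadFile(path)
    if err != nil {
        log.Fatal(err)
    }
    conf := string(data)

    // pause_image = "registry.k8s.io/pause:3.10"
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    // cgroup_manager = "cgroupfs"
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
        log.Fatal(err)
    }
}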
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
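Because no preload tarball exists for this profile, each cached image is stat'ed on the guest, copied only if missing, and then loaded into CRI-O's storage with `sudo podman load -i`. A minimal check-then-load sketch in Go, with the tarball path taken from the log purely as an example:

// load_cached_image_sketch.go - illustrative check-then-load of a cached image tarball.
package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    tarball := "/var/lib/minikube/images/kube-proxy_v1.31.1" // example path from the log

    if _, err := os.Stat(tarball); err != nil {
        log.Fatalf("image tarball not present, it would need to be copied first: %v", err)
    }

    // Equivalent of: sudo podman load -i <tarball>
    out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    if err != nil {
        log.Fatalf("podman load failed: %v\n%s", err, out)
    }
    log.Printf("loaded: %s", out)
}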
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
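The interleaved default-k8s-diff-port-093771 lines show the driver polling libvirt for the VM's DHCP lease and retrying with growing waits (309ms up to ~3.5s). A toy Go sketch of such a retry loop follows; the backoff bounds and the placeholder waitForIP function are assumptions, not the actual retry.go implementation.

// retry_sketch.go - illustrative jittered retry loop like the "will retry after ..." lines above.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP stands in for the DHCP lease lookup; it fails a few times before succeeding.
func waitForIP(attempt int) (string, error) {
    if attempt < 4 {
        return "", errors.New("unable to find current IP address of domain")
    }
    return "192.168.50.116", nil
}

func main() {
    for attempt := 1; attempt <= 10; attempt++ {
        ip, err := waitForIP(attempt)
        if err == nil {
            fmt.Printf("Found IP for machine: %s\n", ip)
            return
        }
        // Back off with a little jitter, growing with the attempt count.
        wait := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(attempt)
        fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
        time.Sleep(wait)
    }
    fmt.Println("gave up waiting for machine to come up")
}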
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
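
Note: the WaitForSSH step above shells out to the system ssh client with non-interactive options and simply retries `exit 0` until the guest answers. Below is a minimal, illustrative Go sketch of that probe; the function name, retry interval, and option subset are assumptions for the example, not minikube's actual code.

	package provision

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH repeatedly runs `ssh ... exit 0` against the guest until it
	// succeeds or the timeout elapses, mirroring the probe logged above.
	func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-F", "/dev/null",
				"-o", "ConnectTimeout=10",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "PasswordAuthentication=no",
				"-o", "IdentitiesOnly=yes",
				"-i", keyPath,
				"-p", "22",
				user+"@"+ip,
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // guest is reachable over SSH
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s@%s not available within %s", user, ip, timeout)
	}

For instance, waitForSSH("docker", "192.168.50.116", "<path to id_rsa>", time.Minute) corresponds to the external ssh command logged above.
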
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
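
Note: the shell fragment above makes the machine name resolvable on the guest: if no /etc/hosts line already ends in the hostname, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A rough Go equivalent of that idempotent edit follows (illustrative only; minikube runs the shell shown in the log).

	package provision

	import (
		"os"
		"strings"
	)

	// setHostsEntry maps 127.0.1.1 to name in an /etc/hosts-style file exactly once.
	func setHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) > 1 && f[len(f)-1] == name {
				return nil // hostname already present, nothing to do
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}
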
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
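
Note: the "generating server cert" line above issues a serving certificate signed by the local minikube CA with the SANs listed (127.0.0.1, the node IP, the machine name, localhost, minikube). A condensed Go sketch of issuing such a certificate with crypto/x509 is shown below; loading of the CA key pair is assumed, and this is not minikube's code.

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate with the given CA, adding the
	// same kinds of SANs that appear in the log line above.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, nodeIP net.IP, hostname string) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins." + hostname}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), nodeIP},
			DNSNames:     []string{hostname, "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}
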
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
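
Note: the fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host-side timestamp; the ~81ms delta is within tolerance, so no clock resync is needed. A tiny illustrative helper for that comparison (names assumed for the example):

	package provision

	import (
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the output of `date +%s.%N` and returns the absolute
	// difference from the reference time (float parsing is plenty for a
	// tolerance measured in seconds).
	func clockDelta(guestDateOutput string, reference time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := guest.Sub(reference)
		if d < 0 {
			d = -d
		}
		return d, nil
	}
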
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
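
Note: the find/mv command above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube manages stays active. A rough Go equivalent (illustrative; minikube runs the shell command shown):

	package provision

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman configs under dir to
	// *.mk_disabled and returns the files it moved.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}
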
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
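
Note: the series of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. A simplified Go sketch of the first two whole-line substitutions (illustrative only, not minikube's implementation):

	package provision

	import (
		"fmt"
		"os"
		"regexp"
	)

	// configureCrio rewrites the `pause_image` and `cgroup_manager` lines of a
	// CRI-O drop-in, mirroring the first two sed invocations in the log.
	func configureCrio(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0644)
	}

For example, configureCrio("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs") would produce the same lines the sed commands write, after which crio is restarted.
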
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
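
Note: the openssl/ln steps above install the CA certificates into the guest's OpenSSL trust store: `openssl x509 -hash` yields the subject hash (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the test certs here), and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL find the file. An illustrative Go helper doing the same two steps:

	package provision

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert creates the /etc/ssl/certs/<subject-hash>.0 symlink that
	// OpenSSL uses to locate a trusted CA certificate.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace a stale link if one exists
		return os.Symlink(certPath, link)
	}
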
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
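Note: the hash-named symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directories, and -checkend 86400 asserts a certificate will not expire within the next 24 hours. A minimal sketch of the same two checks, assuming a hypothetical CA file at /usr/share/ca-certificates/example.pem:

	# derive the subject hash that names the /etc/ssl/certs symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"
	# exits non-zero if the cert expires within the next 86400 seconds
	openssl x509 -noout -in /usr/share/ca-certificates/example.pem -checkend 86400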
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
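Note: the restart path rebuilds the control plane piecewise (certs, kubeconfig, kubelet-start, control-plane, etcd) via individual kubeadm init phases rather than a full kubeadm init. Outside of minikube's own flow, the regenerated PKI can be inspected on the node with standard tooling; a sketch, assuming the kubeadm.yaml shown above is still in place:

	sudo kubeadm certs check-expiration --config /var/tmp/minikube/kubeadm.yaml
	# or inspect a single cert directly, e.g. the apiserver serving cert
	sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt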
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
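Note: the healthz progression above is typical for a freshly restarted apiserver probed anonymously: 403 until the RBAC bootstrap roles that allow unauthenticated /healthz access exist, then 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. A rough shell equivalent of the polling loop, assuming the same endpoint and its self-signed cert (hence -k):

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.50.116:8444/healthz)" = "200" ]; do
	  sleep 0.5
	done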
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
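For reference, the delta reported here is just guest time minus host-recorded remote time: 1727207213.488062061 - 1727207213.408658589 ≈ 0.0794 s, i.e. the 79.403472ms shown, comfortably inside the clock-skew tolerance the fixer accepts.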
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
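The sysctl probe above exits 255 because /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded, so minikube falls back to loading the module and enabling IPv4 forwarding itself. Run by hand, a sketch of the same sequence would be:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables    # should resolve once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"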
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
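After the sed edits and the crictl.yaml write above, the guest's runtime configuration should look roughly as follows (a sketch; the actual file contents are not echoed in the log):

	$ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	$ cat /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock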
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
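The hosts update above stages the rewritten file under /tmp/h.$$ and then installs it with sudo cp instead of redirecting directly, because in "sudo cmd > /etc/hosts" the redirection would still be performed by the unprivileged calling shell. Spelled out as a standalone sketch, the one-liner is equivalent to:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts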
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
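	The 2159-byte file copied above (/var/tmp/minikube/kubeadm.yaml.new) is the multi-document YAML stream printed earlier in the log: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A minimal sketch of how such a stream could be sanity-checked before it is handed to kubeadm; this is illustrative only and not minikube's own code, and the field names inspected are simply the ones visible in the dump above.

	// Hypothetical sanity check for a multi-document kubeadm YAML stream.
	// Not part of minikube; illustrative sketch only.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the YAML stream
				}
				log.Fatalf("malformed document: %v", err)
			}
			// Each document should at least declare its kind and apiVersion.
			fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		}
	}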
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
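	The openssl "x509 ... -checkend 86400" invocations above confirm that each existing control-plane certificate remains valid for at least another 24 hours before the restart reuses it. A rough standard-library equivalent in Go, shown only to illustrate what that check does (the path is one of the files probed in the log, and this is not the test harness's actual implementation):

	// Check that a PEM-encoded certificate does not expire within the next 24h,
	// roughly what `openssl x509 -checkend 86400` verifies. Illustrative sketch only.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; regeneration needed")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}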
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
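	The /healthz exchanges above follow the usual restart pattern: the port first refuses connections, the anonymous probe then gets 403, the endpoint returns 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, and finally 200. A bare-bones polling loop in the same spirit, shown only as a sketch; the real check authenticates with the cluster's client certificates, whereas TLS verification is simply skipped here for brevity.

	// Poll an apiserver /healthz endpoint until it reports 200 OK or a deadline passes.
	// Illustrative sketch, not minikube's implementation; the address is the one from the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.21:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				log.Printf("healthz returned %d, retrying", resp.StatusCode)
			} else {
				log.Printf("healthz not reachable yet: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy before the deadline")
	}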
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
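Each block of cri.go / logs.go lines above is one diagnostic pass by process 70152: pgrep for a kube-apiserver process, crictl queries for every expected control-plane container (all empty), then kubelet, dmesg, describe-nodes, CRI-O and container-status log gathering. The embedded kubectl describe nodes call is refused because nothing is serving on localhost:8443 yet. A hedged sketch of repeating those checks manually from a shell on the node follows; the commands are copied from the log and assume shell access to the VM, e.g. via minikube ssh:

    # Any apiserver container at all? Empty output corresponds to the found id: "" lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver

    # The same describe-nodes call the harness runs; it fails with "connection refused"
    # while no apiserver is listening on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # Kubelet logs usually explain why the static apiserver pod has not started.
    sudo journalctl -u kubelet -n 400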
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
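The "container status" step wraps a small shell fallback: it resolves crictl's path (falling back to the bare name when which finds nothing) and, if that invocation fails entirely, falls back to docker ps. Expanded for readability, the one-liner from the log is roughly:

    # Prefer crictl, by absolute path when which can resolve it; otherwise try docker.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a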
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
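	(The log-gathering cycle above follows a fixed pattern: for each control-plane component it lists matching CRI containers with "crictl ps -a --quiet --name=<component>" and then tails each container's logs with "crictl logs --tail 400 <id>". The following is a minimal Go sketch of that pattern only, assuming crictl is on PATH and sudo is available; it is not the minikube cri.go/logs.go code itself.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns container IDs matching a CRI name filter, mirroring
// the "sudo crictl ps -a --quiet --name=<name>" calls visible in the log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 lines of one container's logs, as the
// "sudo /usr/bin/crictl logs --tail 400 <id>" steps above do.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
		}
	}
}
```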
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
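	(The health gate logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200. A minimal, self-contained Go sketch of that polling loop follows; the endpoint is copied from the log, and InsecureSkipVerify is used only to keep the example standalone, whereas the real client authenticates with the cluster's CA material.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log line above (https://192.168.39.134:8443/healthz).
	url := "https://192.168.39.134:8443/healthz"

	// NOTE: skipping TLS verification is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz returned 200: ok")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}
```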
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
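	(The repeated pod_ready lines above poll a pod's Ready condition every few seconds until a 4m0s deadline, after which the wait is abandoned and the control plane is reset. A rough external equivalent of that probe, using kubectl's jsonpath output, is sketched below; the context and pod names are copied from the log and are illustrative only, not minikube's internal pod_ready.go logic.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Names taken from the log; adjust to the cluster and pod being watched.
	pod := "metrics-server-6867b74b74-rgcll"
	context := "default-k8s-diff-port-093771"
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"-n", "kube-system", "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println(pod, "is Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for", pod, "to become Ready")
}
```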
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
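	(The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not contain it; in this run every grep fails because the files no longer exist after the kubeadm reset, so each rm is effectively a no-op. A minimal Go sketch of that check-and-remove loop follows, assuming it runs as root on the node; the endpoint string is taken from the log.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint copied from the log; other profiles in this report use port 8444.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so kubeadm regenerates it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping", f)
	}
}
```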
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
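	The two Run lines just above read back the kube-apiserver's OOM score adjustment through /proc right after the bridge CNI config is written; the log reports -16. A minimal Go sketch of the same probe, assuming pgrep is on PATH and /proc is readable on the node (names here are illustrative, not minikube's own helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Locate the newest kube-apiserver process, mirroring the log's pgrep call.
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not found:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))

		// Read the legacy oom_adj value for that PID; the run above reports -16,
		// i.e. the apiserver is among the last processes the OOM killer targets.
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("kube-apiserver oom_adj: %s", adj)
	}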
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
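	The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are the wait for the service-account controller to create the "default" ServiceAccount before kube-system privileges are elevated. A minimal sketch of that polling loop, reusing the binary path and kubeconfig shown in the log (the loop shape is an assumption inferred from the timestamps, not minikube's source):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll until "kubectl get sa default" exits zero, meaning the default
		// ServiceAccount exists in the freshly initialized cluster.
		for {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default ServiceAccount is present")
				return
			}
			time.Sleep(500 * time.Millisecond) // interval observed between the retries above
		}
	}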
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
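	node_ready and pod_ready above first wait for the node's Ready condition and then extra-wait for each system-critical pod to report Ready. A small client-go sketch of the node-side check; the kubeconfig path and the use of client-go here are only an illustration of the condition being inspected, not the runner's actual code path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; the test run writes its own copies.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-093771", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// "Ready":"True" in the log corresponds to this condition on the Node object.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}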
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
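	api_server.go above waits for the kube-apiserver process via pgrep and then polls the unauthenticated /healthz endpoint until it answers 200 "ok". A minimal sketch of that probe against the address from this run; TLS verification is skipped only because the cluster CA is not loaded in the sketch:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint the log checks: https://192.168.50.116:8444/healthz
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.116:8444/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" once healthy
	}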
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
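The two lines above write minikube's 496-byte bridge conflist; a minimal sketch for inspecting that CNI setup on the node, assuming the same conflist path and the CRI-O socket used elsewhere in this run:

	# Inspect the bridge CNI config written above (path taken from the log).
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# The bridge device the plugin manages, and the runtime's NetworkReady condition.
	ip -brief link show type bridge
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock info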
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
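The oom_adj value above is read from /proc; the same check by hand (assuming kube-apiserver is the newest matching process) would be:

	cat /proc/$(pgrep -n kube-apiserver)/oom_adj        # legacy knob the log reads (-16 here)
	cat /proc/$(pgrep -n kube-apiserver)/oom_score_adj  # the non-deprecated equivalent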
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
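The addon phase ends here with default-storageclass, storage-provisioner and metrics-server enabled; a minimal sketch for verifying that state out-of-band, assuming the minikube profile and kubectl context are both named embed-certs-311319:

	minikube addons list -p embed-certs-311319
	kubectl --context embed-certs-311319 -n kube-system get deploy metrics-server
	kubectl --context embed-certs-311319 get storageclass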
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
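The per-pod readiness waits above have a kubectl equivalent; a sketch using the same labels and the 6m budget from the log (context name assumed to be embed-certs-311319):

	kubectl --context embed-certs-311319 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	kubectl --context embed-certs-311319 -n kube-system wait pod \
	  -l component=etcd --for=condition=Ready --timeout=6m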
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
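The NodePressure check above reports a cpu capacity of 2 and ephemeral storage of 17734596Ki; a one-line way to read the same capacities back, assuming the context name embed-certs-311319:

	kubectl --context embed-certs-311319 get node embed-certs-311319 -o jsonpath='{.status.capacity}'
	# expected to include: cpu 2, ephemeral-storage 17734596Ki (values from the log above)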
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
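After the successful start above, a quick sanity check that kubectl points at the new cluster (names as reported in the log):

	kubectl config current-context        # expected: embed-certs-311319
	kubectl get nodes -o wide
	kubectl -n kube-system get pods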
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
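The failed v1.20.0 init above already names the next diagnostic steps; gathered into one sketch to run on the node, with the same CRI-O socket the error text uses:

	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 200
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container found above:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID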
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
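The four grep/rm pairs above implement a simple stale-config sweep; the same logic as one loop, using the endpoint and file list from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"    # missing or stale: drop it, as the run above does
	  fi
	done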
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
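	A rough sketch of the retry that the suggestion above points at (illustrative only: `<profile>` is a placeholder for the cluster name used by the failing test, and the --extra-config value is taken verbatim from the suggestion in the log, not from a verified fix):

		# inspect the kubelet journal on the affected node, as the suggestion recommends
		minikube ssh -p <profile> -- sudo journalctl -xeu kubelet

		# retry the start with the cgroup-driver hint from the suggestion
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd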
	
	
	==> CRI-O <==
	Sep 24 20:00:54 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:54.986371242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208054986338724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=190421b1-ef5a-4dc8-b3af-16ced2dc2c87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:54 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:54.986938661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e161c98-82eb-4919-a39b-a5fddf043de4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:54 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:54.987013894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e161c98-82eb-4919-a39b-a5fddf043de4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:54 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:54.987317309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e161c98-82eb-4919-a39b-a5fddf043de4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.027481815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1edf5e5-7561-4707-9685-adbe4a149932 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.027639672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1edf5e5-7561-4707-9685-adbe4a149932 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.028864893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a734217-c586-4f18-bb39-50167502035f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.029397525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208055029373364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a734217-c586-4f18-bb39-50167502035f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.030068568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae8f3d73-ee87-44b8-bcad-62b132841ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.030150175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae8f3d73-ee87-44b8-bcad-62b132841ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.030412100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae8f3d73-ee87-44b8-bcad-62b132841ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.071034842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44caa144-307b-4ee4-8b01-a45443ab8cb6 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.071141274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44caa144-307b-4ee4-8b01-a45443ab8cb6 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.072396951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ca7f02b-23b4-458c-b668-a71ded9fb6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.073090575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208055073059502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ca7f02b-23b4-458c-b668-a71ded9fb6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.074006166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcdb1da0-4416-4b8b-b36a-7930054aae72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.074137871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcdb1da0-4416-4b8b-b36a-7930054aae72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.075410705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcdb1da0-4416-4b8b-b36a-7930054aae72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.115257916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb063b05-bc16-411d-9135-378a8dcb556e name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.115518225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb063b05-bc16-411d-9135-378a8dcb556e name=/runtime.v1.RuntimeService/Version
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.116998125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd112f96-a53d-40a1-a443-ca3b08c036b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.117536067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208055117507903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd112f96-a53d-40a1-a443-ca3b08c036b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.118141884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8589b3ca-e5f6-4fd7-9090-e096485f48b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.118232009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8589b3ca-e5f6-4fd7-9090-e096485f48b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:00:55 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:00:55.118496907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8589b3ca-e5f6-4fd7-9090-e096485f48b6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b6f65eec9f0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   cf49c730126f6       storage-provisioner
	d05c709fa2730       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   fac66c2115436       coredns-7c65d6cfc9-nzssp
	3cb4369fc1e40       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   189511ecad721       coredns-7c65d6cfc9-87t62
	d9c77eb695dfe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   5ffc9af5a09ca       kube-proxy-5rw7b
	32ab49acc4ac7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   e8cac2baea090       kube-scheduler-default-k8s-diff-port-093771
	3e6ac8738592c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   f22a07a10b746       kube-controller-manager-default-k8s-diff-port-093771
	ed1c1d2106c8b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   d747699bf5a8a       etcd-default-k8s-diff-port-093771
	ac621738ad1f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   e3abe44660030       kube-apiserver-default-k8s-diff-port-093771
	58152b2400355       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   a4917d1a9dab2       kube-apiserver-default-k8s-diff-port-093771
	
	
	==> coredns [3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-093771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-093771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=default-k8s-diff-port-093771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:51:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-093771
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:00:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:56:52 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:56:52 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:56:52 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:56:52 +0000   Tue, 24 Sep 2024 19:51:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    default-k8s-diff-port-093771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44d371c36b3f412b9fb6d4d146e398ef
	  System UUID:                44d371c3-6b3f-412b-9fb6-d4d146e398ef
	  Boot ID:                    f9efba96-f43f-40dd-8bcf-03c6890f483b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-87t62                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-nzssp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-default-k8s-diff-port-093771                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-093771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-093771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-5rw7b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-093771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-gnlkd                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node default-k8s-diff-port-093771 event: Registered Node default-k8s-diff-port-093771 in Controller
	
	
	==> dmesg <==
	[  +0.047752] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.784276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853459] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543282] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.500511] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.066876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071532] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.209958] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.147921] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.310626] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.142292] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.913779] systemd-fstab-generator[905]: Ignoring "noauto" option for root device
	[  +0.058810] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.507566] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.178283] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 19:51] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.064068] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +4.688802] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.356125] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.365379] systemd-fstab-generator[3011]: Ignoring "noauto" option for root device
	[  +0.116364] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.094216] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f] <==
	{"level":"info","ts":"2024-09-24T19:51:31.643129Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-09-24T19:51:31.643166Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-09-24T19:51:31.643003Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:51:31.646638Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"70e810c2542c58a7","initial-advertise-peer-urls":["https://192.168.50.116:2380"],"listen-peer-urls":["https://192.168.50.116:2380"],"advertise-client-urls":["https://192.168.50.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:51:31.646715Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:51:32.599903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T19:51:32.599955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T19:51:32.599990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgPreVoteResp from 70e810c2542c58a7 at term 1"}
	{"level":"info","ts":"2024-09-24T19:51:32.600003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgVoteResp from 70e810c2542c58a7 at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 70e810c2542c58a7 elected leader 70e810c2542c58a7 at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.601377Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.602241Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"70e810c2542c58a7","local-member-attributes":"{Name:default-k8s-diff-port-093771 ClientURLs:[https://192.168.50.116:2379]}","request-path":"/0/members/70e810c2542c58a7/attributes","cluster-id":"938c7bbb9c530c74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:51:32.602278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:51:32.602642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:51:32.603256Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:51:32.604025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:51:32.604201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:51:32.604226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:51:32.604704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:51:32.605372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.116:2379"}
	{"level":"info","ts":"2024-09-24T19:51:32.605676Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"938c7bbb9c530c74","local-member-id":"70e810c2542c58a7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.605747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.605778Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:00:55 up 14 min,  0 users,  load average: 0.35, 0.22, 0.13
	Linux default-k8s-diff-port-093771 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d] <==
	W0924 19:51:25.138321       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.191709       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.202230       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.277208       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.287975       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.307502       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.400804       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.404431       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.425166       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.453178       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.457541       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.486388       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.495454       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.505135       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.508746       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.509012       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.574106       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.643103       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.748840       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.794270       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.804044       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.843666       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.964297       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:26.002156       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:26.045876       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3] <==
	W0924 19:56:34.854869       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:56:34.854936       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:56:34.856068       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:56:34.856153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 19:57:34.856276       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 19:57:34.856414       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:57:34.856545       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 19:57:34.856555       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:57:34.857818       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:57:34.857903       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 19:59:34.858676       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 19:59:34.858847       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:59:34.858901       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 19:59:34.858990       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:59:34.860067       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:59:34.860143       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa] <==
	E0924 19:55:40.723318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:55:41.274473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:56:10.729515       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:11.281500       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:56:40.735080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:41.288473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:56:52.870483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-093771"
	E0924 19:57:10.740814       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:11.295311       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:57:30.452052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="97.986µs"
	E0924 19:57:40.746417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:41.302262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:57:42.454211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="94.889µs"
	E0924 19:58:10.752158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:11.309427       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:58:40.757896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:41.316859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:10.762993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:11.323468       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:40.768817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:41.330209       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:00:10.774475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:00:11.336912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:00:40.780345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:00:41.344107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:51:42.593855       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:51:42.636427       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
	E0924 19:51:42.636510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:51:42.991513       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:51:42.991628       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:51:42.991659       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:51:43.083795       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:51:43.084099       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:51:43.084129       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:51:43.103202       1 config.go:199] "Starting service config controller"
	I0924 19:51:43.111795       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:51:43.109690       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:51:43.111904       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:51:43.110215       1 config.go:328] "Starting node config controller"
	I0924 19:51:43.111937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:51:43.211946       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:51:43.212019       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:51:43.212030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002] <==
	W0924 19:51:33.944440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:51:33.944800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:33.944479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:51:33.944857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.796264       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:51:34.796740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.803847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:51:34.803942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.816917       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:51:34.817153       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:51:34.826844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 19:51:34.826922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.884961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 19:51:34.885026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.900565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:51:34.900740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.999001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:51:34.999265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.049340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:51:35.049536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.063557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 19:51:35.063742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.086541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 19:51:35.086795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 19:51:37.912977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 19:59:41 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:41.438211    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 19:59:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:46.573509    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207986573082866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:46.573548    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207986573082866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:55 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:55.438187    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 19:59:56 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:56.574696    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207996574391942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 19:59:56 default-k8s-diff-port-093771 kubelet[2905]: E0924 19:59:56.575348    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727207996574391942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:06 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:06.581011    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208006579147827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:06 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:06.581501    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208006579147827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:08 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:08.438925    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:00:16 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:16.582697    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208016582445138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:16 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:16.582988    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208016582445138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:19 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:19.438631    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:00:26 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:26.584456    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208026583851780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:26 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:26.584911    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208026583851780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:32 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:32.438343    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:36.454912    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:36.586192    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208036585814304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:36.586284    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208036585814304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:46.440703    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:00:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:46.590381    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208046588232808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:00:46.590485    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208046588232808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d] <==
	I0924 19:51:43.455202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:51:43.467011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:51:43.467057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:51:43.477854       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:51:43.477967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56!
	I0924 19:51:43.481940       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa2debb-26a6-4ab0-9784-2c276ac06b32", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56 became leader
	I0924 19:51:43.578958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gnlkd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd: exit status 1 (60.417692ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gnlkd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0924 19:52:24.266786   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:52:27.583917   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:54:07.085934   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:54:38.249038   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:54:49.790215   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-311319 -n embed-certs-311319
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:01:23.071967169 +0000 UTC m=+6092.173978870
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
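For context on the timeout reported above: the "waiting 9m0s for pods matching k8s-app=kubernetes-dashboard" step is a label-selector poll against the kube-apiserver that gives up when its context deadline expires. The sketch below is a hypothetical, minimal client-go equivalent of such a wait loop (it is not minikube's actual helpers_test.go code; the helper name, poll interval, and kubeconfig loading are illustrative assumptions), shown only to make the "context deadline exceeded" failure mode concrete.

```go
// Hypothetical sketch of a label-selector wait like the one that timed out above.
// Assumptions (not from the report): helper name, 5s poll interval, kubeconfig from
// the default home location. A 9-minute timeout surfaces as "context deadline exceeded".
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until a pod matching selector in namespace ns is Running,
// or until the timeout elapses (returning a context-deadline error).
func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunningPod(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	// If no matching pod ever reaches Running, err reports the deadline expiry,
	// which is the same class of failure logged by the test above.
	fmt.Println("wait result:", err)
}
```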
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-311319 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-311319 logs -n 25: (2.014190788s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
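	(Editor's note) The netfilter check above fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A minimal sketch of that try-then-fallback sequence, run locally with os/exec for illustration; minikube executes the same commands on the guest over SSH:

	// netfilter_sketch.go - illustrative only
	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and logs (but does not abort on) failures,
	// mirroring how the log above treats the sysctl probe as non-fatal.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// The key is absent until br_netfilter is loaded, which matches the
			// "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables" error above.
			_ = run("sudo", "modprobe", "br_netfilter")
		}
		_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}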
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
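
The three ln -fs steps above install each trusted CA under /etc/ssl/certs twice: once by filename and once under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate a CA in a CApath directory. A minimal local Go sketch of the hash-and-link step follows; the paths and the root privileges it needs are assumptions, and minikube runs the equivalent shell commands over SSH rather than code like this.

    // installca.go: link a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject hash, mirroring the ln -fs commands in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pemPath string) error {
    	// `openssl x509 -hash -noout` prints the subject-name hash that
    	// OpenSSL uses to look the certificate up in a CApath directory.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// The convention is <hash>.0; the numeric suffix disambiguates collisions.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
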
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
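
Each of the `-checkend 86400` runs above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit would mark it for regeneration before the control plane comes back up. A minimal Go sketch of the same check without shelling out (the certificate path is just one example taken from the log):

    // certvalid.go: report whether a certificate expires within the next
    // 24 hours, mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Same question -checkend asks: is NotAfter earlier than now+d?
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; would be regenerated")
    		os.Exit(1)
    	}
    	fmt.Println("certificate still valid for at least 24h")
    }
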
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
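
Because a preload exists for v1.31.1 with cri-o, the ~388 MB tarball of pre-pulled images is copied to /preloaded.tar.lz4 and unpacked into /var instead of pulling each image individually. A small Go sketch of that extraction step, run locally; minikube executes it over SSH, and sudo plus an lz4 binary on the target are assumed:

    // preload.go: unpack an lz4-compressed preload tarball into a target
    // directory, mirroring the `tar --xattrs -I lz4 -C /var -xf` call above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball, dest string) error {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", // delegate decompression to the lz4 binary
    		"-C", dest, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
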
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
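
The healthz wait above treats a 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist) and a 500 (individual poststarthook checks still failing) as "not ready yet" and keeps polling roughly every 500 ms until /healthz returns 200. A minimal Go sketch of such a poller; the address, timeout, and the InsecureSkipVerify shortcut are assumptions, and the real client trusts the generated cluster CA instead:

    // healthz.go: poll the apiserver /healthz endpoint until it reports ok,
    // treating 403/500 responses as "not ready yet", as in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// Skipping TLS verification keeps the sketch self-contained.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			// 403: anonymous probe rejected before RBAC bootstrap roles exist.
    			// 500: one or more poststarthook checks are still failing.
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.134:8443/healthz", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver healthy")
    }
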
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
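
For the kvm2 driver with the crio runtime the bridge CNI is selected, and a small conflist is written to /etc/cni/net.d/1-k8s.conflist. The sketch below writes an illustrative bridge conflist of that general shape; the field values, including the usual 10.244.0.0/16 pod CIDR, are assumptions rather than a byte-for-byte copy of minikube's 496-byte file:

    // writecni.go: write a minimal bridge CNI conflist like the one copied to
    // /etc/cni/net.d/1-k8s.conflist above. Values are illustrative only.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true, // pods reach off-node addresses via the bridge
    				"ipMasq":           true, // masquerade pod traffic leaving the node
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // assumed pod CIDR
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	data, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
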
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
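The openssl x509 -noout -checkend 86400 runs above verify that each control-plane certificate stays valid for at least another 24 hours before the cluster is restarted. As a standalone illustration only (not minikube's implementation; the file path below is simply one of the paths from this log), the same check can be expressed in Go by parsing the PEM certificate and comparing its NotAfter timestamp:

// certcheck.go: a sketch of the "will this certificate expire within 24h?"
// test performed above with openssl. Illustrative only; the path is an example.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent in spirit to `openssl x509 -checkend 86400`.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h - regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}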
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
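The restartPrimaryControlPlane sequence above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it, so the following "kubeadm init phase" runs regenerate fresh kubeconfigs. A minimal, hypothetical sketch of that prune-or-keep decision (this is not the actual minikube code; the endpoint and paths are copied from this log purely for illustration):

// staleconf.go: keep a kubeconfig only if it already points at the expected
// control-plane endpoint, otherwise delete it so kubeadm recreates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to prune; kubeadm will create the file
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already targets the expected endpoint, keep it
	}
	fmt.Printf("%q does not reference %q - removing stale config\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}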
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
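The healthz loop above probes https://192.168.50.116:8444/healthz roughly every 500ms, treating 403 (RBAC bootstrap roles not yet created) and 500 (post-start hooks still pending) as transient until the endpoint finally answers 200 "ok". The following self-contained sketch shows that style of probe using only the Go standard library; it is an illustration, not the api_server.go implementation, and TLS verification is skipped because the probe is anonymous:

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // /healthz returned 200: the control plane is serving
			}
			// 403 and 500 responses are expected while the apiserver starts up.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.116:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}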
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
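Once the apiserver is healthy, the pod_ready.go lines above wait for each system-critical pod to report the Ready condition (skipping pods whose node is not yet Ready). A rough client-go equivalent of that wait, shown only as a sketch (the kubeconfig path and pod name are taken from this log; this is not the test harness's own helper):

// podready.go: poll a pod until its PodReady condition is True or the context expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "kube-scheduler-default-k8s-diff-port-093771"); err != nil {
		fmt.Println(err)
	}
}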
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
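	[editor's note] Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands for orientation (the sed commands only rewrite individual keys; section headers and the exact file contents may differ):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]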
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
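	[editor's note] The two "Will wait 60s for ..." steps above are simple retry loops around a command until it succeeds or a deadline passes. A simplified local sketch of that pattern (minikube runs these commands over SSH via ssh_runner, not locally, and the 500ms interval is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor retries the command until it exits 0 or the timeout expires.
	func waitFor(timeout time.Duration, name string, args ...string) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, name)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// wait for the CRI-O socket to exist, then for crictl to answer
		if err := waitFor(60*time.Second, "stat", "/var/run/crio/crio.sock"); err != nil {
			panic(err)
		}
		if err := waitFor(60*time.Second, "sudo", "/usr/bin/crictl", "version"); err != nil {
			panic(err)
		}
		fmt.Println("cri-o is up")
	}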
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
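	[editor's note] The "couldn't find preloaded image ... assuming images are not preloaded" decision and the "all images are preloaded" recheck above both come from scanning `sudo crictl images --output json` for the expected image tag; a miss triggers the tarball copy and extraction seen in between. A hedged sketch of that check (struct fields trimmed to what is needed; not minikube's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages mirrors the relevant shape of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(target string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if strings.Contains(tag, target) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
		if err != nil {
			panic(err)
		}
		fmt.Println("preloaded:", ok) // false would trigger the preload tarball copy and extraction
	}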
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
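	[editor's note] The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the subject hashes printed by `openssl x509 -hash -noout` for each certificate, with a ".0" suffix; that is how OpenSSL locates trusted CAs in /etc/ssl/certs. A small sketch of deriving the symlink name (illustrative, minikube simply shells out as shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// certHash returns the subject hash openssl prints for a PEM certificate.
	func certHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		hash, err := certHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		// e.g. prints "ln -fs ... /etc/ssl/certs/b5213941.0" for the minikube CA seen above
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}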
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
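	[editor's note] Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is what decides whether the existing control-plane certs can be reused. The same question answered in Go with crypto/x509 (a sketch; minikube shells out to openssl instead):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon) // true would force certificate regeneration
	}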
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
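	[editor's note] The healthz sequence above is a poll loop: unauthenticated requests first get 403 (anonymous access denied), then 500 while the rbac/bootstrap-roles and priority-class post-start hooks are still pending, and finally 200. A hedged sketch of that loop (minikube authenticates with the cluster's client certificates rather than skipping TLS verification as done here):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls /healthz until it returns 200 or the deadline passes;
	// 403 and 500 responses, as seen in the log above, just mean "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.61.21:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}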
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
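	[editor's note] The 496-byte conflist written above is not reproduced in the log. For orientation, a typical bridge CNI conflist for the 10.244.0.0/16 pod CIDR used by this cluster looks roughly like the following; this is an illustrative example, not the exact file minikube writes:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}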
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
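	The cycle above repeats on every retry: because no kube-apiserver container has been created, each "describe nodes" attempt fails with a connection refused on localhost:8443. A minimal sketch of the same checks run by hand on the node (commands taken verbatim from the log; SSH access to the minikube VM is assumed):

	    sudo crictl ps -a --quiet --name=kube-apiserver    # empty output: no apiserver container exists yet
	    sudo journalctl -u kubelet -n 400                  # kubelet logs, as gathered by minikube above
	    sudo journalctl -u crio -n 400                     # CRI-O logs, as gathered by minikube above
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig        # fails while the apiserver is down: localhost:8443 refused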
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
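	The interleaved pod_ready lines come from the parallel StartStop tests polling the Ready condition of their metrics-server pods. A sketch of an equivalent manual check (pod name and namespace taken from the log; kubectl access to the same cluster is assumed):

	    kubectl -n kube-system get pod metrics-server-6867b74b74-jfrhm \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is not Ready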
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
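	(The block above repeats for the rest of this log: with no kube-apiserver process on the node, run 70152 probes CRI-O for each expected control-plane container and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output; the describe-nodes step fails with "connection refused" on localhost:8443 because the apiserver never came up. A minimal local Go sketch of that diagnostic loop follows, using the same shell commands shown in the log. It runs them directly with os/exec rather than over SSH as minikube's ssh_runner does, so treat it as an illustration of the sequence, not the actual minikube code.)

	// Local sketch of the diagnostic loop the log repeats. Commands are the ones
	// shown verbatim above; execution is local, not over SSH as in ssh_runner.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes a command through bash, mirroring the `Run: /bin/bash -c ...`
	// entries in the log, and returns its combined output.
	func run(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Probe for a running apiserver process first, as in `sudo pgrep -xnf ...`.
		if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
			fmt.Println("no kube-apiserver process found")
		}

		// Same container names the log probes with `crictl ps -a --quiet --name=...`.
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		found := false
		for _, name := range names {
			out, _ := run("sudo crictl ps -a --quiet --name=" + name)
			if strings.TrimSpace(out) == "" {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			found = true
		}

		// With no containers to inspect, gather the same logs the report shows.
		if !found {
			for _, cmd := range []string{
				"sudo journalctl -u kubelet -n 400",
				"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
				"sudo journalctl -u crio -n 400",
				"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			} {
				out, _ := run(cmd)
				fmt.Println(out)
			}
		}
	}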
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
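The conflist transferred above is not reproduced in the log, so the following is only an illustrative sketch of a typical bridge-plus-portmap CNI config written to /etc/cni/net.d. The field values (subnet, plugin options) are assumptions; the actual 496-byte file minikube generates may differ.

package main

import (
	"log"
	"os"
)

// bridgeConflist is a generic example of a bridge CNI config, not the exact
// content minikube writes.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		log.Fatal(err)
	}
}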
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
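The repeated "kubectl get sa default" runs above are a poll loop: the "default" ServiceAccount is created asynchronously by the controller manager, so the privilege-elevation step waits for it to exist before proceeding. Below is a hedged Go sketch of that pattern; the two-minute timeout is an assumption (the real limit is not shown in the log), while the binary and kubeconfig paths are copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls until "kubectl get sa default" succeeds
// or the deadline passes, mirroring the ~0.5s retry cadence in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the service account exists; privileges can be granted now
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute, // assumed timeout for illustration
	)
	fmt.Println(err)
}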
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
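The healthz check logged above is a plain HTTPS GET against /healthz that is treated as healthy on a 200 response with body "ok". Below is a minimal Go sketch of such a probe; it skips TLS verification only because the sketch does not load the cluster CA that a real client would trust, and the URL is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether GET <url> returns HTTP 200 with body "ok".
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.50.116:8444/healthz"))
}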
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
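	(Editor's note) The cleanup lines above show minikube probing each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deleting any file that does not contain it, so the following kubeadm init can regenerate them. A minimal shell sketch of that check, assuming the same endpoint and file list as in the log (everything else is illustrative, not minikube's actual code):

	    #!/usr/bin/env bash
	    # Sketch of the stale-kubeconfig check performed above (illustrative).
	    ENDPOINT="https://control-plane.minikube.internal:8443"   # endpoint probed in the log
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # If the file is missing or does not reference the endpoint, remove it so
	        # that the subsequent 'kubeadm init' writes a fresh copy.
	        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	            sudo rm -f "/etc/kubernetes/$f"
	        fi
	    done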
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
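	(Editor's note) The "Gathering logs" steps above can be reproduced by hand on the node when triaging this failure. A short sketch using the same commands and paths shown in the log; only the output file names are illustrative:

	    #!/usr/bin/env bash
	    # Re-run the diagnostics minikube gathers above and keep copies locally.
	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a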
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
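	(Editor's note) The trace above points at a kubelet that never became healthy: the healthz endpoint kubeadm polls stays unreachable and no control-plane containers are ever found. The checks suggested in the output, plus the retry flag from the final Suggestion line, can be run as follows; this is a sketch on the affected node, and the minikube invocation omits whatever profile and driver flags the test normally passes:

	    # Checks suggested by the kubeadm/minikube output above.
	    systemctl status kubelet                      # is the kubelet service running?
	    journalctl -xeu kubelet                       # why did it fail to start?
	    curl -sSL http://localhost:10248/healthz      # endpoint kubeadm polls during wait-control-plane
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	    # Retry with the cgroup-driver override suggested above (other start flags assumed unchanged).
	    minikube start --extra-config=kubelet.cgroup-driver=systemd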
	
	
	==> CRI-O <==
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.567793298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208084567745977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a700dc0-3ed0-4f10-ac7e-d3e5ddc9ca0d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.568293350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbf91bc3-6add-476e-96fb-9cb8afa13085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.568371082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbf91bc3-6add-476e-96fb-9cb8afa13085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.568553976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbf91bc3-6add-476e-96fb-9cb8afa13085 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.604205565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=104eb1c6-a23a-49e7-a2bc-783e941104a2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.604294116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=104eb1c6-a23a-49e7-a2bc-783e941104a2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.605451942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b2c1396-4c7b-45d6-a729-0aa236217300 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.605874098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208084605848722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b2c1396-4c7b-45d6-a729-0aa236217300 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.606466526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a7d31e9-ce8a-4ad5-8582-9816ab43d567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.606540150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a7d31e9-ce8a-4ad5-8582-9816ab43d567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.606727463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a7d31e9-ce8a-4ad5-8582-9816ab43d567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.652327599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef1776c1-d6bc-4541-b2f7-7245cdf5e153 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.652431237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef1776c1-d6bc-4541-b2f7-7245cdf5e153 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.654007402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=657fdee2-0a26-443e-a131-af2de2aaf3cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.654474941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208084654452427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=657fdee2-0a26-443e-a131-af2de2aaf3cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.654932874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0224d2c4-505a-4c02-9674-daab0868aedc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.655106319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0224d2c4-505a-4c02-9674-daab0868aedc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.655359047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0224d2c4-505a-4c02-9674-daab0868aedc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.686494294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38afe00a-9fd7-4fef-8bcb-5cdf2fd26cb0 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.686594535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38afe00a-9fd7-4fef-8bcb-5cdf2fd26cb0 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.687979600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28f9a830-89d5-4395-a449-d9919e36086a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.688379929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208084688355469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28f9a830-89d5-4395-a449-d9919e36086a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.689068490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80ee29fe-7fb7-44f7-a85b-5eb10558df28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.689139328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80ee29fe-7fb7-44f7-a85b-5eb10558df28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:01:24 embed-certs-311319 crio[702]: time="2024-09-24 20:01:24.689335153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80ee29fe-7fb7-44f7-a85b-5eb10558df28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34839ea54a689       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   131d8a27413b9       storage-provisioner
	cc98dcca72ffe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ce52f165118d0       coredns-7c65d6cfc9-jsvdk
	dc0a601e7e634       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a67c3d12af90e       coredns-7c65d6cfc9-qgfvt
	f63b4a01201f9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   ecb272b5bfcdb       kube-proxy-h42s7
	fca8e7b367bba       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   db53007d93ee5       kube-scheduler-embed-certs-311319
	c9e336118db96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   3ac76716e5041       etcd-embed-certs-311319
	d87a2e960ce81       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   bb4de848e5140       kube-controller-manager-embed-certs-311319
	ddf5bb4c542ce       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   3274ea157c618       kube-apiserver-embed-certs-311319
	d13b5e782473d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   1d7f2587996ec       kube-apiserver-embed-certs-311319
	
	
	==> coredns [cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-311319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-311319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=embed-certs-311319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:52:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-311319
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:57:26 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:57:26 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:57:26 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:57:26 +0000   Tue, 24 Sep 2024 19:52:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.21
	  Hostname:    embed-certs-311319
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5aa1f90b84574d049fd1d5b4831e8f5a
	  System UUID:                5aa1f90b-8457-4d04-9fd1-d5b4831e8f5a
	  Boot ID:                    2a938032-7c38-4598-a997-31f6fe2d9d55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jsvdk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-qgfvt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-311319                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-311319             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-311319    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-h42s7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-311319             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-xnwm4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node embed-certs-311319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node embed-certs-311319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node embed-certs-311319 event: Registered Node embed-certs-311319 in Controller
	
	
	==> dmesg <==
	[  +0.048034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040696] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep24 19:47] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.800042] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543751] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.370586] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.063360] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050821] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.173868] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.121991] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.266232] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +3.732987] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +1.731156] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.062452] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.492552] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.804494] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 19:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.601286] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +4.402424] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.137894] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +5.849886] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.106281] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.912819] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0] <==
	{"level":"info","ts":"2024-09-24T19:52:04.585742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:52:04.586071Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f9d71b865fa366d6","initial-advertise-peer-urls":["https://192.168.61.21:2380"],"listen-peer-urls":["https://192.168.61.21:2380"],"advertise-client-urls":["https://192.168.61.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:52:04.586934Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:52:04.587126Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.21:2380"}
	{"level":"info","ts":"2024-09-24T19:52:04.587164Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.21:2380"}
	{"level":"info","ts":"2024-09-24T19:52:05.445259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T19:52:05.445368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T19:52:05.445435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 received MsgPreVoteResp from f9d71b865fa366d6 at term 1"}
	{"level":"info","ts":"2024-09-24T19:52:05.445473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 received MsgVoteResp from f9d71b865fa366d6 at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f9d71b865fa366d6 elected leader f9d71b865fa366d6 at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.447162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448031Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f9d71b865fa366d6","local-member-attributes":"{Name:embed-certs-311319 ClientURLs:[https://192.168.61.21:2379]}","request-path":"/0/members/f9d71b865fa366d6/attributes","cluster-id":"7e645ebb8a3ca2e3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:52:05.448226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:52:05.448519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:52:05.448733Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7e645ebb8a3ca2e3","local-member-id":"f9d71b865fa366d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:52:05.448828Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:52:05.448888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448932Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.449477Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:52:05.449587Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:52:05.450244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:52:05.450393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.21:2379"}
	
	
	==> kernel <==
	 20:01:25 up 14 min,  0 users,  load average: 0.00, 0.06, 0.07
	Linux embed-certs-311319 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d] <==
	W0924 19:51:58.204074       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.235203       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.278205       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.350420       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.402514       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.475449       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.480116       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.519791       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.543681       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.704228       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.776409       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.953055       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.957400       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.123608       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.240448       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.265223       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.330030       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.338604       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.410880       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.461854       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.468512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.513882       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.563265       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.584777       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.642239       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9] <==
	W0924 19:57:07.701772       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:57:07.702055       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 19:57:07.703243       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:57:07.703298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 19:58:07.703769       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:58:07.703869       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 19:58:07.703928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 19:58:07.703992       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 19:58:07.704993       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 19:58:07.705038       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:00:07.705901       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 20:00:07.705914       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:00:07.706111       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 20:00:07.706225       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:00:07.707439       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:00:07.707507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd] <==
	E0924 19:56:13.582462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:14.010305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:56:43.588373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:56:44.018218       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:57:13.594405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:14.026069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:57:27.000327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-311319"
	E0924 19:57:43.599732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:57:44.033422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:58:13.606035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:14.040582       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 19:58:16.163432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="90.714µs"
	I0924 19:58:30.163433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="153.726µs"
	E0924 19:58:43.611741       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:58:44.047517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:13.617594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:14.054734       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 19:59:43.622554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 19:59:44.061409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:00:13.629778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:00:14.067781       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:00:43.635551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:00:44.074633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:01:13.641386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:01:14.083552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:52:15.344510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:52:15.367569       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.21"]
	E0924 19:52:15.367618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:52:15.586965       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:52:15.587001       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:52:15.587025       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:52:15.594145       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:52:15.594380       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:52:15.594391       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:52:15.595730       1 config.go:199] "Starting service config controller"
	I0924 19:52:15.595753       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:52:15.595771       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:52:15.595775       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:52:15.599634       1 config.go:328] "Starting node config controller"
	I0924 19:52:15.599644       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:52:15.696807       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 19:52:15.696849       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:52:15.699839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2] <==
	W0924 19:52:06.716855       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:52:06.716934       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:52:07.538634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.538683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.583533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 19:52:07.583638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.668732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 19:52:07.668902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.733245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 19:52:07.733292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.734664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.734812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.744995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:52:07.745077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.892355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:52:07.892505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.908388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 19:52:07.908500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.920918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.921014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.934878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:52:07.934938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:08.278578       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:52:08.278638       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 19:52:10.009494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 20:00:17 embed-certs-311319 kubelet[2883]: E0924 20:00:17.151125    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:00:19 embed-certs-311319 kubelet[2883]: E0924 20:00:19.253202    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208019252891883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:19 embed-certs-311319 kubelet[2883]: E0924 20:00:19.253455    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208019252891883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:29 embed-certs-311319 kubelet[2883]: E0924 20:00:29.255032    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208029254614329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:29 embed-certs-311319 kubelet[2883]: E0924 20:00:29.255288    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208029254614329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:31 embed-certs-311319 kubelet[2883]: E0924 20:00:31.150204    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:00:39 embed-certs-311319 kubelet[2883]: E0924 20:00:39.257205    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208039256919013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:39 embed-certs-311319 kubelet[2883]: E0924 20:00:39.257244    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208039256919013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:43 embed-certs-311319 kubelet[2883]: E0924 20:00:43.151054    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:00:49 embed-certs-311319 kubelet[2883]: E0924 20:00:49.258802    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208049258464927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:49 embed-certs-311319 kubelet[2883]: E0924 20:00:49.258830    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208049258464927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:54 embed-certs-311319 kubelet[2883]: E0924 20:00:54.149677    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:00:59 embed-certs-311319 kubelet[2883]: E0924 20:00:59.260640    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208059260416776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:00:59 embed-certs-311319 kubelet[2883]: E0924 20:00:59.260715    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208059260416776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]: E0924 20:01:09.150708    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]: E0924 20:01:09.159553    2883 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]: E0924 20:01:09.262259    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208069262004069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:01:09 embed-certs-311319 kubelet[2883]: E0924 20:01:09.262295    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208069262004069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:01:19 embed-certs-311319 kubelet[2883]: E0924 20:01:19.264605    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208079264135550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:01:19 embed-certs-311319 kubelet[2883]: E0924 20:01:19.264987    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208079264135550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:01:21 embed-certs-311319 kubelet[2883]: E0924 20:01:21.150697    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	
	
	==> storage-provisioner [34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f] <==
	I0924 19:52:16.530927       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:52:16.553345       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:52:16.553516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:52:16.566116       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:52:16.566322       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034!
	I0924 19:52:16.567099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28f73b8c-db42-48eb-ba7a-97825a01b844", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034 became leader
	I0924 19:52:16.667287       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034!
	

                                                
                                                
-- /stdout --
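The kube-proxy nftables cleanup failure, the kubelet ip6tables canary error, and the recurring "missing image stats" eviction-manager messages in the log above all originate inside the guest VM. As a hedged sketch (the profile name embed-certs-311319 is taken from the log; the commands are ordinary minikube/crictl/kubectl usage and are not part of the test harness), they can be inspected by hand:

	# IPv6 netfilter support: both the failed "add table ip6 kube-proxy" call and the
	# ip6tables canary error suggest the ip6 nat / nf_tables modules are unavailable.
	out/minikube-linux-amd64 -p embed-certs-311319 ssh -- "lsmod | grep -E 'ip6table|nf_tables'"

	# Image filesystem stats: the eviction manager reports no ContainerFilesystems usage
	# from the runtime; crictl shows what CRI-O actually returns for the same query.
	out/minikube-linux-amd64 -p embed-certs-311319 ssh -- "sudo crictl imagefsinfo"

	# nodePortAddresses: the kube-proxy warning suggests --nodeport-addresses primary; on a
	# kubeadm-provisioned cluster the current setting is in the kube-proxy ConfigMap.
	kubectl --context embed-certs-311319 -n kube-system get configmap kube-proxy -o yaml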
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-311319 -n embed-certs-311319
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-311319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xnwm4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4: exit status 1 (66.331758ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xnwm4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.08s)
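The post-mortem above looks up the pod by the name captured earlier, and that pod no longer exists by the time describe runs, hence the NotFound. A hedged sketch of an equivalent check that does not depend on a specific pod name (the k8s-app=metrics-server label is the conventional metrics-server label and is assumed here rather than taken from this report):

	# List whatever metrics-server pods exist right now, regardless of name.
	kubectl --context embed-certs-311319 -n kube-system get pods -l k8s-app=metrics-server -o wide

	# Surface the image-pull events behind the ImagePullBackOff on
	# fake.domain/registry.k8s.io/echoserver:1.4 seen in the kubelet log.
	kubectl --context embed-certs-311319 -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server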

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
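Every poll below fails with connection refused: nothing is listening on the apiserver endpoint at all, as opposed to the server being up but the dashboard pods missing. A hedged sketch of the same distinction made by hand (the endpoint is taken from the warnings; the old-k8s-version profile name is not shown in this excerpt, so a placeholder is used for the kubectl context):

	# "connection refused" here means the apiserver itself is down; once it is listening,
	# the request will at least complete a TLS handshake and return an HTTP status.
	curl -vk https://192.168.72.81:8443/healthz

	# With the apiserver answering, the wait condition above reduces to:
	kubectl --context <old-k8s-version-profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard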
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 6 more times)
E0924 19:55:13.536401   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 16 more times)
E0924 19:55:30.150556   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 30 more times)
E0924 19:56:01.313193   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 6 more times)
E0924 19:56:08.228519   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 28 more times)
E0924 19:56:36.599147   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 7 more times)
E0924 19:56:44.662276   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
(last message repeated 11 more times)
E0924 19:56:57.223436   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 26 more times]
E0924 19:57:24.266646   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 3 more times]
E0924 19:57:27.584319   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 2 more times]
E0924 19:57:31.293342   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 36 more times]
E0924 19:58:07.724974   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 11 more times]
E0924 19:58:20.287943   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 30 more times]
E0924 19:58:50.648646   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 15 more times]
E0924 19:59:07.084937   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
	[last warning repeated 26 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
E0924 19:59:38.249213   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 11 more times]
E0924 19:59:49.790588   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 22 more times]
E0924 20:00:13.536692   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 13 more times]
E0924 20:00:27.340789   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 40 more times]
E0924 20:01:08.228097   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 36 more times]
E0924 20:01:44.662258   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
    [identical warning repeated 11 more times]
E0924 20:01:57.223071   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
[identical warning repeated 27 times in total; duplicate entries collapsed]
E0924 20:02:24.266873   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
[identical warning repeated 4 times in total; duplicate entries collapsed]
E0924 20:02:27.583578   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
[identical warning repeated 98 times in total; duplicate entries collapsed]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (228.762385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-510301" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (223.076567ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25
E0924 20:04:07.085714   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/auto-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25: (1.510810829s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
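	For reference, the failing "Last Start" recorded in the final row of the table above corresponds, flag for flag, to a single invocation of roughly the following form. This is a reconstruction from the table rows, not a verbatim copy of the runner's command line; the binary path matches the MINIKUBE_BIN value logged below.
	
	  # Reconstructed from the table above (old-k8s-version-510301 start at 19:42 UTC)
	  out/minikube-linux-amd64 start -p old-k8s-version-510301 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0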
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
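
The healthz probes above follow a simple poll-until-healthy pattern: anonymous requests first get 403 (presumably because authorization is still being bootstrapped), then 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200. The sketch below is a minimal, illustrative Go version of that pattern, not minikube's actual api_server.go; the URL, timeout, and poll interval are assumptions taken loosely from the log.

// Illustrative sketch only -- not minikube's api_server.go. Polls the
// apiserver /healthz endpoint until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver presents a cluster-local CA, so this
		// anonymous probe skips TLS verification (assumption for the sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403: anonymous user rejected; 500: one or more poststarthooks
			// (e.g. rbac/bootstrap-roles) have not finished yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.134:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
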
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
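
pod_ready.go above waits for each system-critical pod to report Ready, and deliberately skips the per-pod wait while the hosting node is still NotReady. A minimal client-go check of a pod's Ready condition is sketched below for illustration; it is not minikube's pod_ready.go, and the kubeconfig path and pod name are taken from the log purely as example inputs.

// Illustrative client-go sketch of a "pod Ready" check; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has the PodReady condition set to True.
func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path used here only as an example input.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-qb2mm")
	fmt.Println(ready, err)
}
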
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
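
sshutil.go above builds SSH clients from the machine IP, port 22, the per-profile id_rsa key, and the docker user, and the addon manifests are then copied over those connections. A rough equivalent using golang.org/x/crypto/ssh is sketched below; it is illustrative only, with the host, user, and key path copied from the log and everything else an assumption.

// Illustrative sketch of the kind of SSH client sshutil.go constructs;
// not minikube's actual code.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newSSHClient(host, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test VMs are ephemeral, so host-key checking is skipped here (assumption).
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", host+":22", cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.134", "docker",
		"/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	// A session could now run commands such as `sudo systemctl start kubelet`.
	fmt.Println("connected")
}
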
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
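
The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least the next 24 hours (86,400 seconds) before it is reused. The same check written in Go is sketched below for illustration; the certificate path is taken from the log and the rest is an assumption, not minikube's code.

// Illustrative equivalent of `openssl x509 -noout -in cert.pem -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate")
	} else {
		fmt.Println("certificate still valid for at least 24h")
	}
}
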
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
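The api_server.go lines above probe https://192.168.50.116:8444/healthz roughly every 500ms, tolerating the 403 and 500 responses emitted while the apiserver's post-start hooks finish, until the endpoint finally returns 200. A minimal, hypothetical Go sketch of that kind of polling loop follows; the URL, 500ms cadence, and 4-minute timeout are placeholders read off the log, and the code is not minikube's actual implementation.

// Sketch only: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA, so this sketch skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// Non-200 answers (403 before RBAC bootstrap, 500 while post-start hooks run) are retried.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the checks in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.116:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}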
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
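The pod_ready.go lines above repeatedly fetch each system-critical pod and skip pods whose node is not yet "Ready". A minimal client-go sketch of waiting for a single pod's Ready condition is shown below; the kubeconfig path, namespace, pod name, and timeout are placeholders taken from the log, and this is not minikube's pod_ready.go.

// Sketch only: poll a pod until its Ready condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-093771", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}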
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
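The six openssl runs above use `-checkend 86400`, which exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and exits non-zero when it would expire within that window, so a clean pass here means none of the control-plane client and serving certs are about to lapse. A minimal sketch of the same check for one cert, using a path taken from the log:

    # Exit status tells the story: 0 = still valid in 24h, non-zero = expires within 24h.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"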
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
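	The block above is one pass of the diagnostic loop minikube repeats while waiting for the control plane to return: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, when none are found, fall back to collecting node logs. As a rough sketch only (not minikube source code, just the same commands the log records, run by hand over SSH on the node):
	
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    # an empty result corresponds to the 'No container was found matching' warnings above
	    sudo crictl ps -a --quiet --name="$name" | grep -q . || echo "no container matching $name"
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	
	With the apiserver down, the describe-nodes step keeps failing with "connection to the server localhost:8443 was refused", which is why the same stderr block recurs in each pass below.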
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
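	The interleaved pod_ready lines come from the other test processes (PIDs 69408, 69576, 69904), each polling the Ready condition of its metrics-server pod in the kube-system namespace; the condition stays "False" for the whole window shown. One way to inspect the same condition by hand (illustrative only; the pod name and kubeconfig context differ per cluster):
	
	  kubectl -n kube-system get pod metrics-server-6867b74b74-jfrhm \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'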
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
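	(Editor's note: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below is an approximation of a typical bridge CNI conflist, written as a small Go program; the subnet and plugin options are illustrative assumptions, not minikube's exact contents.)

	// cni_sketch.go - approximate bridge CNI config matching the scp step above.
	package main

	import (
		"fmt"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// CRI-O picks up the first *.conflist it finds under /etc/cni/net.d.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}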
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
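	(Editor's note: the repeated "kubectl get sa default" lines above are a 500ms retry loop that waits for the default service account before granting kube-system cluster-admin. A minimal sketch of that loop, assuming the binary path from the log and an arbitrary 2-minute timeout, follows.)

	// sa_wait_sketch.go - retry "kubectl get sa default" until it succeeds.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // timeout is an illustrative assumption
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}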
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
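	(Editor's note: the node "Ready" wait above can be reproduced with client-go. The kubeconfig path and node name below come from the log; the single-shot check with no retry loop is a simplification, not minikube's actual implementation.)

	// node_ready_sketch.go - read the Ready condition of the node named in the log.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-093771", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node Ready condition: %s\n", c.Status) // "True" once kubelet reports healthy
			}
		}
	}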
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
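	(Editor's note: the healthz probe above is a plain HTTPS GET against the apiserver on port 8444 that expects a 200 and the body "ok". A short illustrative version is below; real minikube trusts the cluster CA, so skipping TLS verification here is only to keep the sketch self-contained.)

	// healthz_sketch.go - check https://192.168.50.116:8444/healthz as in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.116:8444/healthz")
		if err != nil {
			fmt.Println("apiserver not healthy yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}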
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
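	(Editor's note: the four grep/rm pairs above implement a stale-kubeconfig check before re-running kubeadm init: keep each /etc/kubernetes/*.conf only if it already points at the expected control-plane endpoint, otherwise delete it so it gets regenerated. A compact sketch of that pattern, using the same endpoint string the log greps for, follows.)

	// stale_config_sketch.go - remove kubeconfigs that do not reference the endpoint.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: delete it, ignoring errors as the log does.
				os.Remove(path)
				fmt.Println("removed stale", path)
			}
		}
	}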
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
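The metrics-server addon is applied as four manifests (APIService, Deployment, RBAC, Service) with the bundled kubectl, as shown above. Once applied, registration of the aggregated metrics API can be confirmed from the host; a sketch, assuming the kubectl context created for this profile:

	kubectl --context embed-certs-311319 -n kube-system get deploy metrics-server
	kubectl --context embed-certs-311319 get apiservice v1beta1.metrics.k8s.io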
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
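With all three addons reported enabled, the resulting objects can be spot-checked from the host; sketch only, not part of the test itself:

	kubectl --context embed-certs-311319 get storageclass                      # default class created by the addon
	kubectl --context embed-certs-311319 -n kube-system get pod storage-provisioner
	kubectl --context embed-certs-311319 top nodes                             # serves data once metrics-server is ready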
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
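The readiness loop above is roughly what a kubectl wait against the same labels would do; an approximate equivalent for the CoreDNS pods (sketch):

	kubectl --context embed-certs-311319 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s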
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
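The healthz probe logged here can be reproduced directly against the same endpoint; -k is needed because the apiserver presents the cluster's own CA (sketch):

	curl -k https://192.168.61.21:8443/healthz
	# expected response body: ok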
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
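The capacity figures logged by the NodePressure check (ephemeral storage and CPU) come from the node status and can be read back with kubectl; sketch:

	kubectl --context embed-certs-311319 get node embed-certs-311319 -o jsonpath='{.status.capacity}'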
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
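The troubleshooting steps kubeadm prints above map onto a handful of commands on the node itself (run via minikube ssh or a direct shell); a sketch of that sequence:

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause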
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
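The cleanup pass above checks each kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it; a condensed sketch of the same loop:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done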
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
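The failing check polls the kubelet's local health endpoint; from a shell on the node, the same probe and the service state flagged in the warning can be checked directly (sketch):

	curl -sSL http://localhost:10248/healthz
	sudo systemctl is-enabled kubelet
	sudo systemctl is-active kubelet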
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
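The diagnostics gathered above (kubelet journal, dmesg, CRI-O journal, container status) can be collected by hand with the same commands the log shows; sketch:

	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400 --no-pager
	sudo crictl ps -a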
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
	
	
	==> CRI-O <==
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.221875486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208248221857310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c166245-75d3-42d1-98b1-cc351b815f56 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.222368601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7097c71-bf7c-4f5a-8a39-558f3c72af6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.222415982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7097c71-bf7c-4f5a-8a39-558f3c72af6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.222445791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7097c71-bf7c-4f5a-8a39-558f3c72af6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.251688291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deb74cf4-38ae-410c-84bc-e27e10c58a46 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.251757817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deb74cf4-38ae-410c-84bc-e27e10c58a46 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.252874551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=900e06c4-8f73-4cdc-9274-5c89b712415f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.253374099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208248253321403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=900e06c4-8f73-4cdc-9274-5c89b712415f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.253793707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12c5a07e-0d16-43a6-a5e4-0f64c5795699 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.253836260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12c5a07e-0d16-43a6-a5e4-0f64c5795699 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.253869292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=12c5a07e-0d16-43a6-a5e4-0f64c5795699 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.284504064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ba1c0f6-8d92-4e3b-9165-121b42a8bedc name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.284581194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ba1c0f6-8d92-4e3b-9165-121b42a8bedc name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.285489190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81e2e4ae-4ff1-4b52-9260-eead02271c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.285834749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208248285816356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81e2e4ae-4ff1-4b52-9260-eead02271c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.286529393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d684415-63d9-4951-b844-c04c93622b06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.286594508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d684415-63d9-4951-b844-c04c93622b06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.286648572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6d684415-63d9-4951-b844-c04c93622b06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.317586299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a004708-cbe3-4f76-8ed5-715bceea70fa name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.317657991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a004708-cbe3-4f76-8ed5-715bceea70fa name=/runtime.v1.RuntimeService/Version
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.318478087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c22443cb-2e79-454d-8e29-acc5fe89c18a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.318826861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208248318808302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c22443cb-2e79-454d-8e29-acc5fe89c18a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.319678160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=478097a4-61b4-45bd-a882-4ff4e8b7433a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.319727363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=478097a4-61b4-45bd-a882-4ff4e8b7433a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:04:08 old-k8s-version-510301 crio[623]: time="2024-09-24 20:04:08.319756816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=478097a4-61b4-45bd-a882-4ff4e8b7433a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048604] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037476] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.005649] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876766] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.596648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.634241] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.054570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058966] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.197243] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.130135] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.272038] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[Sep24 19:47] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778061] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[ +15.063261] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 19:51] systemd-fstab-generator[5117]: Ignoring "noauto" option for root device
	[Sep24 19:53] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.064427] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:04:08 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-510301 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc000cac720, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000cac720, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000cac720, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x24, 0x0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net.(*Dialer).DialContext(0xc000b0fce0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b32dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x24, 0x60, 0x7fc23b739fa8, 0x118, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net/http.(*Transport).dial(0xc000612000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1dc80, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net/http.(*Transport).dialConn(0xc000612000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0009d2180, 0x5, 0xc000c1dc80, 0x24, 0x0, 0xc000c0eea0, ...)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: net/http.(*Transport).dialConnFor(0xc000612000, 0xc000c10420)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: created by net/http.(*Transport).queueForDial
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: goroutine 150 [runnable]:
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000c60aa0, 0xc000c1dcb0, 0x23, 0xc000b66c40)
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/internal/singleflight/singleflight.go:94
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]: created by internal/singleflight.(*Group).DoChan
	Sep 24 20:04:08 old-k8s-version-510301 kubelet[6602]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Sep 24 20:04:08 old-k8s-version-510301 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 20:04:08 old-k8s-version-510301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (225.606738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510301" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.28s)
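The kubeadm output captured above points at the kubelet and the container runtime, and minikube's own suggestion is to check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. Below is a minimal troubleshooting sketch for this profile; the systemctl/journalctl/crictl commands and the --extra-config flag are quoted from the output above, while the ssh wrapper invocations and the trimmed flag set (the full flags appear in the Audit table further down) are illustrative assumptions, not part of the test.

	# On the host: inspect the kubelet and CRI-O containers inside the VM
	# (inner commands quoted from the kubeadm output above)
	out/minikube-linux-amd64 -p old-k8s-version-510301 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-510301 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 -p old-k8s-version-510301 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver hint from the Suggestion line above
	# (remaining flags from the original invocation omitted for brevity)
	out/minikube-linux-amd64 start -p old-k8s-version-510301 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd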

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965745 -n no-preload-965745
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:07:13.45874131 +0000 UTC m=+6442.560753005
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-965745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-965745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.606µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-965745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
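The check above expects the dashboard-metrics-scraper deployment to carry the substituted image registry.k8s.io/echoserver:1.4 (set via the 'addons enable dashboard --images=MetricsScraper=...' command recorded in the Audit table below); here the describe call never ran because the context deadline had already expired. A minimal manual equivalent, assuming the apiserver is reachable, would be the following (the jsonpath query is an illustrative assumption, not part of the test):

	# Print the container image(s) of the scraper deployment the test inspects
	kubectl --context no-preload-965745 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'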
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-965745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-965745 logs -n 25: (1.118837418s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC | 24 Sep 24 20:06 UTC |
	| start   | -p newest-cni-813973 --memory=2200 --alsologtostderr   | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 20:06:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 20:06:38.553344   76425 out.go:345] Setting OutFile to fd 1 ...
	I0924 20:06:38.553575   76425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:06:38.553583   76425 out.go:358] Setting ErrFile to fd 2...
	I0924 20:06:38.553588   76425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:06:38.553810   76425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 20:06:38.554450   76425 out.go:352] Setting JSON to false
	I0924 20:06:38.555626   76425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6550,"bootTime":1727201849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 20:06:38.555737   76425 start.go:139] virtualization: kvm guest
	I0924 20:06:38.558092   76425 out.go:177] * [newest-cni-813973] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 20:06:38.559423   76425 notify.go:220] Checking for updates...
	I0924 20:06:38.559429   76425 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 20:06:38.561114   76425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 20:06:38.562298   76425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 20:06:38.563597   76425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:38.565041   76425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 20:06:38.566514   76425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 20:06:38.568194   76425 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568310   76425 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568413   76425 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568549   76425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 20:06:38.605849   76425 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 20:06:38.607105   76425 start.go:297] selected driver: kvm2
	I0924 20:06:38.607130   76425 start.go:901] validating driver "kvm2" against <nil>
	I0924 20:06:38.607144   76425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 20:06:38.607905   76425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:06:38.607983   76425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 20:06:38.623524   76425 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 20:06:38.623575   76425 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0924 20:06:38.623624   76425 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0924 20:06:38.623886   76425 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 20:06:38.623917   76425 cni.go:84] Creating CNI manager for ""
	I0924 20:06:38.623959   76425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:06:38.623968   76425 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 20:06:38.624011   76425 start.go:340] cluster config:
	{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:06:38.624096   76425 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:06:38.626062   76425 out.go:177] * Starting "newest-cni-813973" primary control-plane node in "newest-cni-813973" cluster
	I0924 20:06:38.627351   76425 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:06:38.627388   76425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 20:06:38.627395   76425 cache.go:56] Caching tarball of preloaded images
	I0924 20:06:38.627446   76425 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 20:06:38.627456   76425 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 20:06:38.627534   76425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:06:38.627551   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json: {Name:mkb1196762f4c9aa9a83bb92eee1f51551659007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:06:38.627672   76425 start.go:360] acquireMachinesLock for newest-cni-813973: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 20:06:38.627698   76425 start.go:364] duration metric: took 14.172µs to acquireMachinesLock for "newest-cni-813973"
	I0924 20:06:38.627713   76425 start.go:93] Provisioning new machine with config: &{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 20:06:38.627768   76425 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 20:06:38.629371   76425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 20:06:38.629509   76425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 20:06:38.629546   76425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 20:06:38.645119   76425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0924 20:06:38.645563   76425 main.go:141] libmachine: () Calling .GetVersion
	I0924 20:06:38.646112   76425 main.go:141] libmachine: Using API Version  1
	I0924 20:06:38.646132   76425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 20:06:38.646450   76425 main.go:141] libmachine: () Calling .GetMachineName
	I0924 20:06:38.646660   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:06:38.646795   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:06:38.647004   76425 start.go:159] libmachine.API.Create for "newest-cni-813973" (driver="kvm2")
	I0924 20:06:38.647026   76425 client.go:168] LocalClient.Create starting
	I0924 20:06:38.647051   76425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 20:06:38.647079   76425 main.go:141] libmachine: Decoding PEM data...
	I0924 20:06:38.647091   76425 main.go:141] libmachine: Parsing certificate...
	I0924 20:06:38.647131   76425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 20:06:38.647150   76425 main.go:141] libmachine: Decoding PEM data...
	I0924 20:06:38.647182   76425 main.go:141] libmachine: Parsing certificate...
	I0924 20:06:38.647199   76425 main.go:141] libmachine: Running pre-create checks...
	I0924 20:06:38.647207   76425 main.go:141] libmachine: (newest-cni-813973) Calling .PreCreateCheck
	I0924 20:06:38.647534   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:06:38.647945   76425 main.go:141] libmachine: Creating machine...
	I0924 20:06:38.647963   76425 main.go:141] libmachine: (newest-cni-813973) Calling .Create
	I0924 20:06:38.648083   76425 main.go:141] libmachine: (newest-cni-813973) Creating KVM machine...
	I0924 20:06:38.649249   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found existing default KVM network
	I0924 20:06:38.650353   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.650202   76448 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:ff:14} reservation:<nil>}
	I0924 20:06:38.651220   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.651154   76448 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:59:7d} reservation:<nil>}
	I0924 20:06:38.652002   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.651905   76448 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:32:b9} reservation:<nil>}
	I0924 20:06:38.653008   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.652952   76448 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b890}
	I0924 20:06:38.653089   76425 main.go:141] libmachine: (newest-cni-813973) DBG | created network xml: 
	I0924 20:06:38.653110   76425 main.go:141] libmachine: (newest-cni-813973) DBG | <network>
	I0924 20:06:38.653121   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <name>mk-newest-cni-813973</name>
	I0924 20:06:38.653131   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <dns enable='no'/>
	I0924 20:06:38.653139   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   
	I0924 20:06:38.653149   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0924 20:06:38.653156   76425 main.go:141] libmachine: (newest-cni-813973) DBG |     <dhcp>
	I0924 20:06:38.653164   76425 main.go:141] libmachine: (newest-cni-813973) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0924 20:06:38.653169   76425 main.go:141] libmachine: (newest-cni-813973) DBG |     </dhcp>
	I0924 20:06:38.653173   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   </ip>
	I0924 20:06:38.653179   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   
	I0924 20:06:38.653183   76425 main.go:141] libmachine: (newest-cni-813973) DBG | </network>
	I0924 20:06:38.653203   76425 main.go:141] libmachine: (newest-cni-813973) DBG | 
	I0924 20:06:38.658465   76425 main.go:141] libmachine: (newest-cni-813973) DBG | trying to create private KVM network mk-newest-cni-813973 192.168.72.0/24...
	I0924 20:06:38.729165   76425 main.go:141] libmachine: (newest-cni-813973) DBG | private KVM network mk-newest-cni-813973 192.168.72.0/24 created
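
The network XML logged just above is what the KVM driver hands to libvirt: an isolated host-only network named mk-newest-cni-813973 (no <forward> element), DNS disabled, gateway 192.168.72.1/24, and a DHCP pool of 192.168.72.2-192.168.72.253. A minimal Go sketch that renders an equivalent definition with only the standard library; the template and field names here are illustrative, not the driver's actual templates:

    package main

    import (
        "os"
        "text/template"
    )

    // netTmpl mirrors the shape of the network definition in the log above.
    const netTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        t := template.Must(template.New("net").Parse(netTmpl))
        // Values copied from the log; any other free /24 on the host works the same way.
        _ = t.Execute(os.Stdout, map[string]string{
            "Name":      "mk-newest-cni-813973",
            "Gateway":   "192.168.72.1",
            "DHCPStart": "192.168.72.2",
            "DHCPEnd":   "192.168.72.253",
        })
    }
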
	I0924 20:06:38.729252   76425 main.go:141] libmachine: (newest-cni-813973) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 ...
	I0924 20:06:38.729276   76425 main.go:141] libmachine: (newest-cni-813973) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 20:06:38.729290   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.729216   76448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:38.729426   76425 main.go:141] libmachine: (newest-cni-813973) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 20:06:38.981174   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.981033   76448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa...
	I0924 20:06:39.153392   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:39.153281   76448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/newest-cni-813973.rawdisk...
	I0924 20:06:39.153423   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Writing magic tar header
	I0924 20:06:39.153439   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Writing SSH key tar header
	I0924 20:06:39.153450   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:39.153400   76448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 ...
	I0924 20:06:39.153512   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973
	I0924 20:06:39.153543   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 20:06:39.153558   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:39.153616   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 20:06:39.153629   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 20:06:39.153643   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 (perms=drwx------)
	I0924 20:06:39.153655   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins
	I0924 20:06:39.153667   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 20:06:39.153684   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 20:06:39.153698   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 20:06:39.153738   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home
	I0924 20:06:39.153770   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 20:06:39.153784   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Skipping /home - not owner
	I0924 20:06:39.153804   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 20:06:39.153812   76425 main.go:141] libmachine: (newest-cni-813973) Creating domain...
	I0924 20:06:39.154787   76425 main.go:141] libmachine: (newest-cni-813973) define libvirt domain using xml: 
	I0924 20:06:39.154804   76425 main.go:141] libmachine: (newest-cni-813973) <domain type='kvm'>
	I0924 20:06:39.154815   76425 main.go:141] libmachine: (newest-cni-813973)   <name>newest-cni-813973</name>
	I0924 20:06:39.154821   76425 main.go:141] libmachine: (newest-cni-813973)   <memory unit='MiB'>2200</memory>
	I0924 20:06:39.154854   76425 main.go:141] libmachine: (newest-cni-813973)   <vcpu>2</vcpu>
	I0924 20:06:39.154869   76425 main.go:141] libmachine: (newest-cni-813973)   <features>
	I0924 20:06:39.154882   76425 main.go:141] libmachine: (newest-cni-813973)     <acpi/>
	I0924 20:06:39.154892   76425 main.go:141] libmachine: (newest-cni-813973)     <apic/>
	I0924 20:06:39.154919   76425 main.go:141] libmachine: (newest-cni-813973)     <pae/>
	I0924 20:06:39.154943   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.154954   76425 main.go:141] libmachine: (newest-cni-813973)   </features>
	I0924 20:06:39.154967   76425 main.go:141] libmachine: (newest-cni-813973)   <cpu mode='host-passthrough'>
	I0924 20:06:39.155002   76425 main.go:141] libmachine: (newest-cni-813973)   
	I0924 20:06:39.155029   76425 main.go:141] libmachine: (newest-cni-813973)   </cpu>
	I0924 20:06:39.155039   76425 main.go:141] libmachine: (newest-cni-813973)   <os>
	I0924 20:06:39.155046   76425 main.go:141] libmachine: (newest-cni-813973)     <type>hvm</type>
	I0924 20:06:39.155055   76425 main.go:141] libmachine: (newest-cni-813973)     <boot dev='cdrom'/>
	I0924 20:06:39.155070   76425 main.go:141] libmachine: (newest-cni-813973)     <boot dev='hd'/>
	I0924 20:06:39.155083   76425 main.go:141] libmachine: (newest-cni-813973)     <bootmenu enable='no'/>
	I0924 20:06:39.155092   76425 main.go:141] libmachine: (newest-cni-813973)   </os>
	I0924 20:06:39.155100   76425 main.go:141] libmachine: (newest-cni-813973)   <devices>
	I0924 20:06:39.155110   76425 main.go:141] libmachine: (newest-cni-813973)     <disk type='file' device='cdrom'>
	I0924 20:06:39.155124   76425 main.go:141] libmachine: (newest-cni-813973)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/boot2docker.iso'/>
	I0924 20:06:39.155135   76425 main.go:141] libmachine: (newest-cni-813973)       <target dev='hdc' bus='scsi'/>
	I0924 20:06:39.155144   76425 main.go:141] libmachine: (newest-cni-813973)       <readonly/>
	I0924 20:06:39.155157   76425 main.go:141] libmachine: (newest-cni-813973)     </disk>
	I0924 20:06:39.155168   76425 main.go:141] libmachine: (newest-cni-813973)     <disk type='file' device='disk'>
	I0924 20:06:39.155180   76425 main.go:141] libmachine: (newest-cni-813973)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 20:06:39.155197   76425 main.go:141] libmachine: (newest-cni-813973)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/newest-cni-813973.rawdisk'/>
	I0924 20:06:39.155208   76425 main.go:141] libmachine: (newest-cni-813973)       <target dev='hda' bus='virtio'/>
	I0924 20:06:39.155219   76425 main.go:141] libmachine: (newest-cni-813973)     </disk>
	I0924 20:06:39.155233   76425 main.go:141] libmachine: (newest-cni-813973)     <interface type='network'>
	I0924 20:06:39.155245   76425 main.go:141] libmachine: (newest-cni-813973)       <source network='mk-newest-cni-813973'/>
	I0924 20:06:39.155258   76425 main.go:141] libmachine: (newest-cni-813973)       <model type='virtio'/>
	I0924 20:06:39.155270   76425 main.go:141] libmachine: (newest-cni-813973)     </interface>
	I0924 20:06:39.155279   76425 main.go:141] libmachine: (newest-cni-813973)     <interface type='network'>
	I0924 20:06:39.155287   76425 main.go:141] libmachine: (newest-cni-813973)       <source network='default'/>
	I0924 20:06:39.155297   76425 main.go:141] libmachine: (newest-cni-813973)       <model type='virtio'/>
	I0924 20:06:39.155306   76425 main.go:141] libmachine: (newest-cni-813973)     </interface>
	I0924 20:06:39.155316   76425 main.go:141] libmachine: (newest-cni-813973)     <serial type='pty'>
	I0924 20:06:39.155324   76425 main.go:141] libmachine: (newest-cni-813973)       <target port='0'/>
	I0924 20:06:39.155333   76425 main.go:141] libmachine: (newest-cni-813973)     </serial>
	I0924 20:06:39.155342   76425 main.go:141] libmachine: (newest-cni-813973)     <console type='pty'>
	I0924 20:06:39.155353   76425 main.go:141] libmachine: (newest-cni-813973)       <target type='serial' port='0'/>
	I0924 20:06:39.155363   76425 main.go:141] libmachine: (newest-cni-813973)     </console>
	I0924 20:06:39.155377   76425 main.go:141] libmachine: (newest-cni-813973)     <rng model='virtio'>
	I0924 20:06:39.155394   76425 main.go:141] libmachine: (newest-cni-813973)       <backend model='random'>/dev/random</backend>
	I0924 20:06:39.155404   76425 main.go:141] libmachine: (newest-cni-813973)     </rng>
	I0924 20:06:39.155411   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.155420   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.155427   76425 main.go:141] libmachine: (newest-cni-813973)   </devices>
	I0924 20:06:39.155439   76425 main.go:141] libmachine: (newest-cni-813973) </domain>
	I0924 20:06:39.155470   76425 main.go:141] libmachine: (newest-cni-813973) 
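
The domain definition above gives the VM 2 vCPUs, 2200 MiB of RAM, a host-passthrough CPU, a boot order of cdrom (the boot2docker ISO) then hd (the raw disk), a virtio disk, two virtio NICs (one on the private mk-newest-cni-813973 network, one on libvirt's default network), a serial console, and a virtio RNG backed by /dev/random. Once defined it can be inspected from the host with plain virsh; a small, hedged Go wrapper (assuming virsh is on PATH and the user can reach qemu:///system):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // dumpDomain shells out to virsh to print the XML of a defined domain.
    func dumpDomain(name string) (string, error) {
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "dumpxml", name).Output()
        return string(out), err
    }

    func main() {
        xml, err := dumpDomain("newest-cni-813973")
        if err != nil {
            fmt.Println("virsh dumpxml failed:", err)
            return
        }
        fmt.Println(xml)
    }
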
	I0924 20:06:39.159726   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:b3:53:a8 in network default
	I0924 20:06:39.160410   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring networks are active...
	I0924 20:06:39.160427   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:39.161251   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring network default is active
	I0924 20:06:39.161599   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring network mk-newest-cni-813973 is active
	I0924 20:06:39.162092   76425 main.go:141] libmachine: (newest-cni-813973) Getting domain xml...
	I0924 20:06:39.162781   76425 main.go:141] libmachine: (newest-cni-813973) Creating domain...
	I0924 20:06:40.398242   76425 main.go:141] libmachine: (newest-cni-813973) Waiting to get IP...
	I0924 20:06:40.399056   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.399428   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.399477   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.399433   76448 retry.go:31] will retry after 267.563635ms: waiting for machine to come up
	I0924 20:06:40.668985   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.669537   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.669573   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.669494   76448 retry.go:31] will retry after 317.275135ms: waiting for machine to come up
	I0924 20:06:40.987807   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.988375   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.988396   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.988337   76448 retry.go:31] will retry after 338.545245ms: waiting for machine to come up
	I0924 20:06:41.328732   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:41.329217   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:41.329242   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:41.329168   76448 retry.go:31] will retry after 380.674308ms: waiting for machine to come up
	I0924 20:06:41.711843   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:41.712301   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:41.712345   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:41.712276   76448 retry.go:31] will retry after 697.511199ms: waiting for machine to come up
	I0924 20:06:42.411234   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:42.411714   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:42.411742   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:42.411674   76448 retry.go:31] will retry after 769.238862ms: waiting for machine to come up
	I0924 20:06:43.182759   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:43.183241   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:43.183266   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:43.183187   76448 retry.go:31] will retry after 740.100584ms: waiting for machine to come up
	I0924 20:06:43.924193   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:43.924619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:43.924647   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:43.924577   76448 retry.go:31] will retry after 1.472622128s: waiting for machine to come up
	I0924 20:06:45.398527   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:45.399072   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:45.399097   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:45.399028   76448 retry.go:31] will retry after 1.125610234s: waiting for machine to come up
	I0924 20:06:46.526386   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:46.526930   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:46.526972   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:46.526895   76448 retry.go:31] will retry after 2.047140109s: waiting for machine to come up
	I0924 20:06:48.575969   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:48.576384   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:48.576402   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:48.576355   76448 retry.go:31] will retry after 2.412422032s: waiting for machine to come up
	I0924 20:06:50.991542   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:50.992043   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:50.992068   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:50.991993   76448 retry.go:31] will retry after 2.278571042s: waiting for machine to come up
	I0924 20:06:53.271829   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:53.272246   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:53.272266   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:53.272215   76448 retry.go:31] will retry after 4.30479683s: waiting for machine to come up
	I0924 20:06:57.581883   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:57.582356   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:57.582401   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:57.582324   76448 retry.go:31] will retry after 4.135199459s: waiting for machine to come up
	I0924 20:07:01.720860   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.721263   76425 main.go:141] libmachine: (newest-cni-813973) Found IP for machine: 192.168.72.187
	I0924 20:07:01.721299   76425 main.go:141] libmachine: (newest-cni-813973) Reserving static IP address...
	I0924 20:07:01.721313   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has current primary IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.721643   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find host DHCP lease matching {name: "newest-cni-813973", mac: "52:54:00:ae:f7:44", ip: "192.168.72.187"} in network mk-newest-cni-813973
	I0924 20:07:01.798268   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Getting to WaitForSSH function...
	I0924 20:07:01.798297   76425 main.go:141] libmachine: (newest-cni-813973) Reserved static IP address: 192.168.72.187
	I0924 20:07:01.798310   76425 main.go:141] libmachine: (newest-cni-813973) Waiting for SSH to be available...
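
The "will retry after …: waiting for machine to come up" lines above are the driver polling libvirt's DHCP leases with a jittered, growing delay until the guest's MAC address acquires an address; here the lease for 52:54:00:ae:f7:44 appeared after just over 20 seconds of polling. A stand-alone sketch of that wait pattern, with a hypothetical leaseIP lookup standing in for the real libvirt query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no lease yet")

    // leaseIP is a placeholder for the real DHCP-lease lookup by MAC address.
    func leaseIP(mac string) (string, error) {
        return "", errNoLease // pretend the guest has not booted far enough yet
    }

    // waitForIP polls with a jittered, growing delay, as in the log above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 250 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := leaseIP(mac); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine did not get an IP within %v", deadline)
    }

    func main() {
        if _, err := waitForIP("52:54:00:ae:f7:44", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
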
	I0924 20:07:01.801159   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.801553   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973
	I0924 20:07:01.801584   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find defined IP address of network mk-newest-cni-813973 interface with MAC address 52:54:00:ae:f7:44
	I0924 20:07:01.801697   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH client type: external
	I0924 20:07:01.801729   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa (-rw-------)
	I0924 20:07:01.801808   76425 main.go:141] libmachine: (newest-cni-813973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 20:07:01.801830   76425 main.go:141] libmachine: (newest-cni-813973) DBG | About to run SSH command:
	I0924 20:07:01.801857   76425 main.go:141] libmachine: (newest-cni-813973) DBG | exit 0
	I0924 20:07:01.805619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | SSH cmd err, output: exit status 255: 
	I0924 20:07:01.805637   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 20:07:01.805647   76425 main.go:141] libmachine: (newest-cni-813973) DBG | command : exit 0
	I0924 20:07:01.805655   76425 main.go:141] libmachine: (newest-cni-813973) DBG | err     : exit status 255
	I0924 20:07:01.805665   76425 main.go:141] libmachine: (newest-cni-813973) DBG | output  : 
	I0924 20:07:04.806349   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Getting to WaitForSSH function...
	I0924 20:07:04.809046   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.809373   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:04.809403   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.809538   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH client type: external
	I0924 20:07:04.809562   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa (-rw-------)
	I0924 20:07:04.809621   76425 main.go:141] libmachine: (newest-cni-813973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 20:07:04.809638   76425 main.go:141] libmachine: (newest-cni-813973) DBG | About to run SSH command:
	I0924 20:07:04.809650   76425 main.go:141] libmachine: (newest-cni-813973) DBG | exit 0
	I0924 20:07:04.934504   76425 main.go:141] libmachine: (newest-cni-813973) DBG | SSH cmd err, output: <nil>: 
	I0924 20:07:04.934777   76425 main.go:141] libmachine: (newest-cni-813973) KVM machine creation complete!
	I0924 20:07:04.935148   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:07:04.935709   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:04.935883   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:04.936064   76425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 20:07:04.936081   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetState
	I0924 20:07:04.937339   76425 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 20:07:04.937354   76425 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 20:07:04.937361   76425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 20:07:04.937367   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:04.939869   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.940243   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:04.940271   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.940409   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:04.940589   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:04.940757   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:04.940904   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:04.941069   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:04.941273   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:04.941290   76425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 20:07:05.041786   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 20:07:05.041811   76425 main.go:141] libmachine: Detecting the provisioner...
	I0924 20:07:05.041821   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.044571   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.045051   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.045078   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.045405   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.046048   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.046364   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.046950   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.047182   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.047362   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.047374   76425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 20:07:05.151242   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 20:07:05.151392   76425 main.go:141] libmachine: found compatible host: buildroot
	I0924 20:07:05.151408   76425 main.go:141] libmachine: Provisioning with buildroot...
	I0924 20:07:05.151420   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.151666   76425 buildroot.go:166] provisioning hostname "newest-cni-813973"
	I0924 20:07:05.151702   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.151893   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.154418   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.154793   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.154817   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.155016   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.155202   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.155342   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.155484   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.155787   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.155967   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.155980   76425 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-813973 && echo "newest-cni-813973" | sudo tee /etc/hostname
	I0924 20:07:05.272474   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-813973
	
	I0924 20:07:05.272497   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.275218   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.275579   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.275608   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.275763   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.275937   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.276103   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.276218   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.276367   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.276525   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.276539   76425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-813973' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-813973/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-813973' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 20:07:05.386645   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
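
The shell run over SSH above makes the new hostname resolve locally: if /etc/hosts does not already mention newest-cni-813973, an existing 127.0.1.1 entry is rewritten to point at it, otherwise one is appended. The same idea in a hedged, stand-alone Go form, operating on a local copy of the file rather than over SSH:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the logged shell: rewrite an existing 127.0.1.1 line
    // or append one, so the hostname resolves without DNS.
    func ensureHostname(hostsFile, name string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            return err
        }
        text := string(data)
        if strings.Contains(text, name) {
            return nil // already mapped
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(text) {
            text = re.ReplaceAllString(text, "127.0.1.1 "+name)
        } else {
            text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + name + "\n"
        }
        return os.WriteFile(hostsFile, []byte(text), 0644)
    }

    func main() {
        if err := ensureHostname("hosts.test", "newest-cni-813973"); err != nil {
            fmt.Println(err)
        }
    }
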
	I0924 20:07:05.386680   76425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 20:07:05.386699   76425 buildroot.go:174] setting up certificates
	I0924 20:07:05.386708   76425 provision.go:84] configureAuth start
	I0924 20:07:05.386717   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.387014   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:05.389766   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.390090   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.390114   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.390231   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.392385   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.392665   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.392693   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.392805   76425 provision.go:143] copyHostCerts
	I0924 20:07:05.392880   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 20:07:05.392893   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 20:07:05.392968   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 20:07:05.393075   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 20:07:05.393086   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 20:07:05.393122   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 20:07:05.393197   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 20:07:05.393206   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 20:07:05.393241   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 20:07:05.393301   76425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.newest-cni-813973 san=[127.0.0.1 192.168.72.187 localhost minikube newest-cni-813973]
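
provision.go then mints a server certificate signed by the local minikube CA, with SANs covering the VM's IP (192.168.72.187), loopback, and the names localhost, minikube and newest-cni-813973, so TLS clients can verify the endpoint however the machine is addressed. A compact sketch of the same idea with the Go standard library; it is self-contained, generating a throwaway CA instead of loading minikube's ca.pem/ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads ca.pem/ca-key.pem from its certs dir instead.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs listed in the log.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-813973"}},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-813973"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.187")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        fmt.Println("server cert signed by", caCert.Subject.CommonName)
    }
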
	I0924 20:07:05.671120   76425 provision.go:177] copyRemoteCerts
	I0924 20:07:05.671197   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 20:07:05.671227   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.674366   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.674669   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.674696   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.674872   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.675075   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.675284   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.675432   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:05.756026   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 20:07:05.784414   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 20:07:05.807253   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 20:07:05.830848   76425 provision.go:87] duration metric: took 444.109633ms to configureAuth
	I0924 20:07:05.830881   76425 buildroot.go:189] setting minikube options for container-runtime
	I0924 20:07:05.831064   76425 config.go:182] Loaded profile config "newest-cni-813973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:07:05.831133   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.833541   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.833869   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.833887   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.834065   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.834234   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.834369   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.834500   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.834637   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.834798   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.834813   76425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 20:07:06.051828   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 20:07:06.051854   76425 main.go:141] libmachine: Checking connection to Docker...
	I0924 20:07:06.051865   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetURL
	I0924 20:07:06.053100   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using libvirt version 6000000
	I0924 20:07:06.055625   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.056002   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.056028   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.056168   76425 main.go:141] libmachine: Docker is up and running!
	I0924 20:07:06.056181   76425 main.go:141] libmachine: Reticulating splines...
	I0924 20:07:06.056196   76425 client.go:171] duration metric: took 27.40916404s to LocalClient.Create
	I0924 20:07:06.056228   76425 start.go:167] duration metric: took 27.409224483s to libmachine.API.Create "newest-cni-813973"
	I0924 20:07:06.056240   76425 start.go:293] postStartSetup for "newest-cni-813973" (driver="kvm2")
	I0924 20:07:06.056252   76425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 20:07:06.056277   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.056537   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 20:07:06.056566   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.058860   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.059141   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.059169   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.059273   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.059444   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.059598   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.059751   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.144646   76425 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 20:07:06.148803   76425 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 20:07:06.148836   76425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 20:07:06.148927   76425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 20:07:06.149051   76425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 20:07:06.149151   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 20:07:06.158045   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:07:06.180901   76425 start.go:296] duration metric: took 124.646985ms for postStartSetup
	I0924 20:07:06.180962   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:07:06.181638   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:06.184477   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.184843   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.184870   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.185071   76425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:07:06.185311   76425 start.go:128] duration metric: took 27.557534549s to createHost
	I0924 20:07:06.185336   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.187604   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.187973   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.187993   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.188130   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.188318   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.188504   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.188668   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.188846   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:06.189042   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:06.189057   76425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 20:07:06.291287   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727208426.256685486
	
	I0924 20:07:06.291313   76425 fix.go:216] guest clock: 1727208426.256685486
	I0924 20:07:06.291324   76425 fix.go:229] Guest: 2024-09-24 20:07:06.256685486 +0000 UTC Remote: 2024-09-24 20:07:06.185324618 +0000 UTC m=+27.669581975 (delta=71.360868ms)
	I0924 20:07:06.291349   76425 fix.go:200] guest clock delta is within tolerance: 71.360868ms
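
After provisioning, fix.go reads the guest's clock over SSH (date +%s.%N) and compares it with the host's; here the guest is about 71ms ahead, which is within tolerance, so no adjustment is made. A minimal sketch of that comparison using the two timestamps from the log (the tolerance value below is an assumption for illustration, not fix.go's actual threshold):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps as reported in the log above.
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        guest, _ := time.Parse(layout, "2024-09-24 20:07:06.256685486 +0000 UTC")
        host, _ := time.Parse(layout, "2024-09-24 20:07:06.185324618 +0000 UTC")

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold only
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }
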
	I0924 20:07:06.291356   76425 start.go:83] releasing machines lock for "newest-cni-813973", held for 27.663648785s
	I0924 20:07:06.291380   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.291691   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:06.294218   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.294619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.294645   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.294862   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295321   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295545   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295642   76425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 20:07:06.295695   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.295783   76425 ssh_runner.go:195] Run: cat /version.json
	I0924 20:07:06.295810   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.298492   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298570   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298818   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.298865   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298893   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.298908   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.299009   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.299143   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.299210   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.299327   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.299391   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.299519   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.299525   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.299628   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.394621   76425 ssh_runner.go:195] Run: systemctl --version
	I0924 20:07:06.400556   76425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 20:07:06.553107   76425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 20:07:06.559200   76425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 20:07:06.559274   76425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 20:07:06.575699   76425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 20:07:06.575724   76425 start.go:495] detecting cgroup driver to use...
	I0924 20:07:06.575801   76425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 20:07:06.594770   76425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 20:07:06.608944   76425 docker.go:217] disabling cri-docker service (if available) ...
	I0924 20:07:06.609011   76425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 20:07:06.622604   76425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 20:07:06.636741   76425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 20:07:06.760362   76425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 20:07:06.911303   76425 docker.go:233] disabling docker service ...
	I0924 20:07:06.911376   76425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 20:07:06.925636   76425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 20:07:06.937650   76425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 20:07:07.092600   76425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 20:07:07.229337   76425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 20:07:07.242703   76425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 20:07:07.261191   76425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 20:07:07.261266   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.272213   76425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 20:07:07.272274   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.283467   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.293967   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.304422   76425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 20:07:07.314190   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.323587   76425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.339697   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
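For reference, the sed edits above would leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the sketch below; the section headers follow the usual CRI-O config layout and are an assumption, since the resulting file itself is not captured in this log.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]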
	I0924 20:07:07.349696   76425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 20:07:07.359128   76425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 20:07:07.359191   76425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 20:07:07.371644   76425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 20:07:07.380510   76425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:07:07.503465   76425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 20:07:07.596017   76425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 20:07:07.596095   76425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 20:07:07.600545   76425 start.go:563] Will wait 60s for crictl version
	I0924 20:07:07.600605   76425 ssh_runner.go:195] Run: which crictl
	I0924 20:07:07.604927   76425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 20:07:07.638945   76425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 20:07:07.639029   76425 ssh_runner.go:195] Run: crio --version
	I0924 20:07:07.665434   76425 ssh_runner.go:195] Run: crio --version
	I0924 20:07:07.692131   76425 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 20:07:07.693523   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:07.696344   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:07.696730   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:07.696755   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:07.696942   76425 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 20:07:07.700787   76425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:07:07.713808   76425 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0924 20:07:07.715312   76425 kubeadm.go:883] updating cluster {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 20:07:07.715414   76425 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:07:07.715473   76425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:07:07.746745   76425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 20:07:07.746822   76425 ssh_runner.go:195] Run: which lz4
	I0924 20:07:07.750510   76425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 20:07:07.754197   76425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 20:07:07.754230   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 20:07:08.919456   76425 crio.go:462] duration metric: took 1.168988302s to copy over tarball
	I0924 20:07:08.919520   76425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 20:07:10.889988   76425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.970442012s)
	I0924 20:07:10.890012   76425 crio.go:469] duration metric: took 1.970532772s to extract the tarball
	I0924 20:07:10.890021   76425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 20:07:10.927914   76425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:07:10.978115   76425 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 20:07:10.978134   76425 cache_images.go:84] Images are preloaded, skipping loading
	I0924 20:07:10.978142   76425 kubeadm.go:934] updating node { 192.168.72.187 8443 v1.31.1 crio true true} ...
	I0924 20:07:10.978266   76425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-813973 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
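The empty "ExecStart=" line in the drop-in above is the standard systemd idiom for clearing the packaged kubelet command before substituting minikube's own invocation. To confirm the effective unit on the node, one could run the following (illustrative commands, not part of this test run):

	$ systemctl cat kubelet
	$ systemctl show kubelet --property=ExecStart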
	I0924 20:07:10.978362   76425 ssh_runner.go:195] Run: crio config
	I0924 20:07:11.036930   76425 cni.go:84] Creating CNI manager for ""
	I0924 20:07:11.036951   76425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:07:11.036959   76425 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0924 20:07:11.036979   76425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.187 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-813973 NodeName:newest-cni-813973 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 20:07:11.037106   76425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-813973"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 20:07:11.037163   76425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 20:07:11.047054   76425 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 20:07:11.047128   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 20:07:11.056883   76425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0924 20:07:11.074947   76425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 20:07:11.090734   76425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0924 20:07:11.106539   76425 ssh_runner.go:195] Run: grep 192.168.72.187	control-plane.minikube.internal$ /etc/hosts
	I0924 20:07:11.110030   76425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:07:11.121758   76425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:07:11.245501   76425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 20:07:11.261204   76425 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973 for IP: 192.168.72.187
	I0924 20:07:11.261226   76425 certs.go:194] generating shared ca certs ...
	I0924 20:07:11.261246   76425 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.261454   76425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 20:07:11.261519   76425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 20:07:11.261536   76425 certs.go:256] generating profile certs ...
	I0924 20:07:11.261597   76425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key
	I0924 20:07:11.261626   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt with IP's: []
	I0924 20:07:11.445372   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt ...
	I0924 20:07:11.445399   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt: {Name:mk02198f79bf57c260d26b734ea22aa8f3f628e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.445593   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key ...
	I0924 20:07:11.445607   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key: {Name:mk76c67ce818e99f0f77f95f697fcab0ea369953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.445715   76425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f
	I0924 20:07:11.445738   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.187]
	I0924 20:07:11.549177   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f ...
	I0924 20:07:11.549206   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f: {Name:mke312c4bd31b33be33a00ef285941d3b770c863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.549373   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f ...
	I0924 20:07:11.549391   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f: {Name:mkc945c031c99d449634672489a28148f09be903 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.549484   76425 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt
	I0924 20:07:11.549605   76425 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key
	I0924 20:07:11.549688   76425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key
	I0924 20:07:11.549707   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt with IP's: []
	I0924 20:07:12.065999   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt ...
	I0924 20:07:12.066031   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt: {Name:mkd8a9abf68ce58cdf1cea7c32a6baf88e34e862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:12.066194   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key ...
	I0924 20:07:12.066207   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key: {Name:mkef3e0872e81e61a1d34234f751fb86d1a18ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:12.066375   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 20:07:12.066411   76425 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 20:07:12.066421   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 20:07:12.066445   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 20:07:12.066472   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 20:07:12.066494   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 20:07:12.066528   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:07:12.067067   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 20:07:12.103398   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 20:07:12.128148   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 20:07:12.152344   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 20:07:12.175290   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 20:07:12.198416   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 20:07:12.221995   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 20:07:12.244836   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 20:07:12.268723   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 20:07:12.290392   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 20:07:12.312901   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 20:07:12.335124   76425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 20:07:12.351527   76425 ssh_runner.go:195] Run: openssl version
	I0924 20:07:12.357263   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 20:07:12.368584   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.372680   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.372731   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.378250   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 20:07:12.389858   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 20:07:12.401969   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.407422   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.407492   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.413535   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 20:07:12.423829   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 20:07:12.433910   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.438013   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.438081   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.443963   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
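The openssl/ln pairs above implement the standard OpenSSL subject-hash naming for trust-store entries; as a sketch (shell, with the hash value taken from the log lines above), the symlink name is derived like this:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"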
	I0924 20:07:12.454264   76425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 20:07:12.457882   76425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 20:07:12.457932   76425 kubeadm.go:392] StartCluster: {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:07:12.458015   76425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 20:07:12.458082   76425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 20:07:12.493508   76425 cri.go:89] found id: ""
	I0924 20:07:12.493587   76425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 20:07:12.503790   76425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 20:07:12.513883   76425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 20:07:12.522915   76425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 20:07:12.522937   76425 kubeadm.go:157] found existing configuration files:
	
	I0924 20:07:12.522979   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 20:07:12.532237   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 20:07:12.532317   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 20:07:12.542296   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 20:07:12.551220   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 20:07:12.551287   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 20:07:12.560206   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 20:07:12.568894   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 20:07:12.568966   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 20:07:12.578037   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 20:07:12.586622   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 20:07:12.586682   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 20:07:12.595625   76425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 20:07:12.693415   76425 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 20:07:12.693631   76425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 20:07:12.799057   76425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 20:07:12.799235   76425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 20:07:12.799367   76425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 20:07:12.811671   76425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 20:07:13.106703   76425 out.go:235]   - Generating certificates and keys ...
	I0924 20:07:13.106814   76425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 20:07:13.106896   76425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 20:07:13.106986   76425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 20:07:13.306654   76425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 20:07:13.465032   76425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	
	
	==> CRI-O <==
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.097830247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208434097809751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81b59ea7-92b1-4ea8-8017-1a0b5681eb31 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.098457921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e69b357-5c67-4f9d-8a08-10327c0426ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.098520933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e69b357-5c67-4f9d-8a08-10327c0426ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.098704468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e69b357-5c67-4f9d-8a08-10327c0426ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.134905031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9aae9a2f-ab88-4962-92b5-8342f4eb78bf name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.135171674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9aae9a2f-ab88-4962-92b5-8342f4eb78bf name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.136614408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d4a529d-308d-458d-b22a-378c49a8668c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.136999057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208434136968333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d4a529d-308d-458d-b22a-378c49a8668c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.137568107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37f8738b-0e55-4bf6-997d-084efb2d1a88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.137637883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37f8738b-0e55-4bf6-997d-084efb2d1a88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.137842350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37f8738b-0e55-4bf6-997d-084efb2d1a88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.175994916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b0d668e-2053-4b14-9ab9-d85964d79fdb name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.176081856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b0d668e-2053-4b14-9ab9-d85964d79fdb name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.177181524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38beb4f8-0c1b-45f3-ad90-31dd67cdde92 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.177553050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208434177529651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38beb4f8-0c1b-45f3-ad90-31dd67cdde92 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.178349717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37dc0826-2995-4f29-bdee-47f01942edac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.178452278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37dc0826-2995-4f29-bdee-47f01942edac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.178662431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37dc0826-2995-4f29-bdee-47f01942edac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.214769698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3ade0e8-523f-4b76-8d13-371c9f439ef1 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.214870464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3ade0e8-523f-4b76-8d13-371c9f439ef1 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.216149185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c1a9c8e-9fd1-4423-87c5-f2647354d4dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.216560727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208434216535604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c1a9c8e-9fd1-4423-87c5-f2647354d4dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.217134330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c9f2f24-1bda-458d-9b77-e114aa594518 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.217204661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c9f2f24-1bda-458d-9b77-e114aa594518 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:14 no-preload-965745 crio[700]: time="2024-09-24 20:07:14.217446111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207230692066103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51b423ff2358424fcd82804468580c6cc31f8bfde91d6513620d6c70d5270d7,PodSandboxId:290e5f0c006dac437c4a83e12c9615229b0e446b1f26c6df5577c01fdb69e5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727207209776928197,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97f162b3-8eb4-4b04-af2b-978373632a7a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80,PodSandboxId:5ea281ea1825d20bde0a5d0fdba0488c82f3ecca30d953d478f3395c10f8e764,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207207600063381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qb2mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d38dedd6-6361-419c-891d-e5a5189776db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba,PodSandboxId:376a3d2bc97fcc7fcd33727dcd123e61f6a732a1af2219b47b79997fc921f4e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727207200088517256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
25f7a78-bc14-4613-aed5-ab00c8d39366,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8,PodSandboxId:2f178420dd0670c39ae0ed95268f85e8d3314d226b53436c74b203dfffeb9288,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727207199935527912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7520fc22-94af-4575-8df7-4476677d10
93,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d,PodSandboxId:587d6ccb1745ba76a332c8923a5f87bcd11edaf98522071c002a12e9bec09de3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207195192787863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 174e1e03afada0c498597533826f4a8a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4,PodSandboxId:9944102a7ac497b62527c2f8c7f70f28a7165a70643d303dad4b8f2ef97c9ac5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207195154747215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 763974fab4516ef5e6d93eab84e76559,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8,PodSandboxId:b7ea7f8497be3fdd67e55750cba82b260f2a74f128bce9087ebb337c97f1b1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207195117744694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef486e4e6c075645e3229eeb7938b7a9,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca,PodSandboxId:885bb9e653bf5d3866fbe9ed02ecf7b68d086a88962702ea1615800d1b4cce2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207195082144947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-965745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9dc2cb98884b703fb63a65b935124d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c9f2f24-1bda-458d-9b77-e114aa594518 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	50a3e972e70a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   376a3d2bc97fc       storage-provisioner
	f51b423ff2358       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   290e5f0c006da       busybox
	5701cbef602b0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   5ea281ea1825d       coredns-7c65d6cfc9-qb2mm
	daabc8f3d80f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   376a3d2bc97fc       storage-provisioner
	35d91507f646a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      20 minutes ago      Running             kube-proxy                1                   2f178420dd067       kube-proxy-ng8vf
	68e60ea512c88       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      20 minutes ago      Running             kube-scheduler            1                   587d6ccb1745b       kube-scheduler-no-preload-965745
	b09b340cd637a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   9944102a7ac49       etcd-no-preload-965745
	b6f32e0b22cfb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      20 minutes ago      Running             kube-controller-manager   1                   b7ea7f8497be3       kube-controller-manager-no-preload-965745
	8c6b0840dab2d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      20 minutes ago      Running             kube-apiserver            1                   885bb9e653bf5       kube-apiserver-no-preload-965745
	
	
	==> coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39134 - 32408 "HINFO IN 5780760760276393963.1388614174394367891. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.101909183s
	
	
	==> describe nodes <==
	Name:               no-preload-965745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-965745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=no-preload-965745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_38_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:38:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-965745
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:07:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 20:02:27 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 20:02:27 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 20:02:27 +0000   Tue, 24 Sep 2024 19:38:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 20:02:27 +0000   Tue, 24 Sep 2024 19:46:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    no-preload-965745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0a7579b73f843d89d32d738c989e404
	  System UUID:                f0a7579b-73f8-43d8-9d32-d738c989e404
	  Boot ID:                    24d70444-16d0-434e-aeb5-3b94273e684f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-qb2mm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-no-preload-965745                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-965745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-965745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-ng8vf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-965745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-w7bfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node no-preload-965745 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-965745 event: Registered Node no-preload-965745 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-965745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-965745 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-965745 event: Registered Node no-preload-965745 in Controller
	
	
	==> dmesg <==
	[Sep24 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.046885] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035754] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.670442] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.794001] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534674] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.362037] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.054257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063605] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.166455] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.149953] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.277343] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[ +14.614382] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.059924] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.482839] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +5.073446] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.934652] systemd-fstab-generator[1972]: Ignoring "noauto" option for root device
	[  +3.276580] kauditd_printk_skb: 61 callbacks suppressed
	[Sep24 19:47] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] <==
	{"level":"warn","ts":"2024-09-24T19:46:44.385751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.5317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" ","response":"range_response_count:1 size:1219"}
	{"level":"info","ts":"2024-09-24T19:46:44.386323Z","caller":"traceutil/trace.go:171","msg":"trace[627618841] range","detail":"{range_begin:/registry/clusterrolebindings/metrics-server:system:auth-delegator; range_end:; response_count:1; response_revision:591; }","duration":"264.109797ms","start":"2024-09-24T19:46:44.122203Z","end":"2024-09-24T19:46:44.386313Z","steps":["trace[627618841] 'agreement among raft nodes before linearized reading'  (duration: 263.440071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.386012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.144168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:1 size:1210"}
	{"level":"info","ts":"2024-09-24T19:46:44.386517Z","caller":"traceutil/trace.go:171","msg":"trace[1243052314] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:591; }","duration":"268.652269ms","start":"2024-09-24T19:46:44.117857Z","end":"2024-09-24T19:46:44.386510Z","steps":["trace[1243052314] 'agreement among raft nodes before linearized reading'  (duration: 268.115567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.850996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.769341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:metrics-server\" ","response":"range_response_count:1 size:1174"}
	{"level":"warn","ts":"2024-09-24T19:46:44.851014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.082881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3958"}
	{"level":"info","ts":"2024-09-24T19:46:44.851046Z","caller":"traceutil/trace.go:171","msg":"trace[298194409] range","detail":"{range_begin:/registry/clusterrolebindings/system:metrics-server; range_end:; response_count:1; response_revision:592; }","duration":"344.83205ms","start":"2024-09-24T19:46:44.506202Z","end":"2024-09-24T19:46:44.851034Z","steps":["trace[298194409] 'range keys from in-memory index tree'  (duration: 344.695051ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T19:46:44.851056Z","caller":"traceutil/trace.go:171","msg":"trace[365039085] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:592; }","duration":"273.135155ms","start":"2024-09-24T19:46:44.577910Z","end":"2024-09-24T19:46:44.851045Z","steps":["trace[365039085] 'range keys from in-memory index tree'  (duration: 272.961743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.851072Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T19:46:44.506167Z","time spent":"344.900301ms","remote":"127.0.0.1:40702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":1197,"request content":"key:\"/registry/clusterrolebindings/system:metrics-server\" "}
	{"level":"warn","ts":"2024-09-24T19:46:44.851409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.540946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ng8vf\" ","response":"range_response_count:1 size:4936"}
	{"level":"info","ts":"2024-09-24T19:46:44.851457Z","caller":"traceutil/trace.go:171","msg":"trace[343055841] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ng8vf; range_end:; response_count:1; response_revision:592; }","duration":"335.5731ms","start":"2024-09-24T19:46:44.515858Z","end":"2024-09-24T19:46:44.851431Z","steps":["trace[343055841] 'range keys from in-memory index tree'  (duration: 335.277408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:44.851482Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T19:46:44.515822Z","time spent":"335.653445ms","remote":"127.0.0.1:40488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4959,"request content":"key:\"/registry/pods/kube-system/kube-proxy-ng8vf\" "}
	{"level":"info","ts":"2024-09-24T19:46:45.001024Z","caller":"traceutil/trace.go:171","msg":"trace[2047536342] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"122.356278ms","start":"2024-09-24T19:46:44.878648Z","end":"2024-09-24T19:46:45.001005Z","steps":["trace[2047536342] 'read index received'  (duration: 122.139897ms)","trace[2047536342] 'applied index is now lower than readState.Index'  (duration: 215.574µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T19:46:45.001128Z","caller":"traceutil/trace.go:171","msg":"trace[1791779633] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"134.120192ms","start":"2024-09-24T19:46:44.867000Z","end":"2024-09-24T19:46:45.001120Z","steps":["trace[1791779633] 'process raft request'  (duration: 133.856743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T19:46:45.001296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.630323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/metrics-server\" ","response":"range_response_count:1 size:1654"}
	{"level":"info","ts":"2024-09-24T19:46:45.001327Z","caller":"traceutil/trace.go:171","msg":"trace[1451915071] range","detail":"{range_begin:/registry/services/specs/kube-system/metrics-server; range_end:; response_count:1; response_revision:593; }","duration":"122.67448ms","start":"2024-09-24T19:46:44.878645Z","end":"2024-09-24T19:46:45.001319Z","steps":["trace[1451915071] 'agreement among raft nodes before linearized reading'  (duration: 122.599979ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T19:56:36.851839Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-09-24T19:56:36.860453Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":874,"took":"8.37578ms","hash":2438960273,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-24T19:56:36.860499Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2438960273,"revision":874,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T20:01:36.858287Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1117}
	{"level":"info","ts":"2024-09-24T20:01:36.861592Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1117,"took":"2.910754ms","hash":3493830106,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-24T20:01:36.861667Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3493830106,"revision":1117,"compact-revision":874}
	{"level":"info","ts":"2024-09-24T20:06:36.866491Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1360}
	{"level":"info","ts":"2024-09-24T20:06:36.869855Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1360,"took":"2.897485ms","hash":594594942,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-24T20:06:36.869955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":594594942,"revision":1360,"compact-revision":1117}
	
	
	==> kernel <==
	 20:07:14 up 21 min,  0 users,  load average: 0.20, 0.16, 0.11
	Linux no-preload-965745 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] <==
	I0924 20:02:39.614619       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:02:39.615712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:04:39.615440       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:04:39.615501       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 20:04:39.616866       1 handler_proxy.go:99] no RequestInfo found in the context
	I0924 20:04:39.616875       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0924 20:04:39.616962       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:04:39.618230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:06:38.616592       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:38.616680       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 20:06:39.618350       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:39.618475       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 20:06:39.618560       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:39.618634       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 20:06:39.619645       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:06:39.619746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] <==
	E0924 20:02:12.265784       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:02:12.852170       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:02:27.274645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-965745"
	E0924 20:02:42.272292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:02:42.859211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:02:49.531080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="147.312µs"
	I0924 20:03:01.526474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="44.946µs"
	E0924 20:03:12.278172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:12.865465       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:03:42.283555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:42.872047       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:12.288515       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:12.878572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:42.294142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:42.885243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:12.299280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:12.892295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:42.307512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:42.899166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:12.312658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:12.905458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:42.318870       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:42.914171       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:07:12.324059       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:07:12.921186       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:46:40.246129       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:46:40.255229       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.134"]
	E0924 19:46:40.255448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:46:40.296558       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:46:40.296596       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:46:40.296654       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:46:40.300320       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:46:40.300798       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:46:40.300832       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:46:40.302504       1 config.go:199] "Starting service config controller"
	I0924 19:46:40.302549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:46:40.302581       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:46:40.302601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:46:40.303309       1 config.go:328] "Starting node config controller"
	I0924 19:46:40.303338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:46:40.403310       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 19:46:40.403466       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:46:40.403478       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] <==
	I0924 19:46:36.231353       1 serving.go:386] Generated self-signed cert in-memory
	W0924 19:46:38.572845       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 19:46:38.572954       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 19:46:38.572984       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 19:46:38.573053       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 19:46:38.631436       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 19:46:38.633452       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:46:38.639145       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 19:46:38.641558       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 19:46:38.643073       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 19:46:38.641631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 19:46:38.743853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 20:06:03 no-preload-965745 kubelet[1354]: E0924 20:06:03.512869    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:06:04 no-preload-965745 kubelet[1354]: E0924 20:06:04.794416    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208364794155040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:04 no-preload-965745 kubelet[1354]: E0924 20:06:04.794451    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208364794155040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:14 no-preload-965745 kubelet[1354]: E0924 20:06:14.796020    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208374795673641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:14 no-preload-965745 kubelet[1354]: E0924 20:06:14.796064    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208374795673641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:16 no-preload-965745 kubelet[1354]: E0924 20:06:16.513578    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:06:24 no-preload-965745 kubelet[1354]: E0924 20:06:24.797854    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208384797584136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:24 no-preload-965745 kubelet[1354]: E0924 20:06:24.797881    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208384797584136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:28 no-preload-965745 kubelet[1354]: E0924 20:06:28.512715    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]: E0924 20:06:34.531965    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]: E0924 20:06:34.800552    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208394799944553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:34 no-preload-965745 kubelet[1354]: E0924 20:06:34.800666    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208394799944553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:42 no-preload-965745 kubelet[1354]: E0924 20:06:42.512199    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:06:44 no-preload-965745 kubelet[1354]: E0924 20:06:44.803010    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208404802587958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:44 no-preload-965745 kubelet[1354]: E0924 20:06:44.803316    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208404802587958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:54 no-preload-965745 kubelet[1354]: E0924 20:06:54.804967    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208414804562109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:54 no-preload-965745 kubelet[1354]: E0924 20:06:54.805003    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208414804562109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:57 no-preload-965745 kubelet[1354]: E0924 20:06:57.512562    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	Sep 24 20:07:04 no-preload-965745 kubelet[1354]: E0924 20:07:04.806527    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208424806176609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:04 no-preload-965745 kubelet[1354]: E0924 20:07:04.806933    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208424806176609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:09 no-preload-965745 kubelet[1354]: E0924 20:07:09.513125    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w7bfj" podUID="52962ba3-838e-4cb9-9349-ca3760633a12"
	
	
	==> storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] <==
	I0924 19:47:10.773086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:47:10.783341       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:47:10.783486       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:47:28.184285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:47:28.184581       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f!
	I0924 19:47:28.184908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8166aaa3-4b4c-449e-a89c-dbccda9e331c", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f became leader
	I0924 19:47:28.287779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-965745_0ab79c9f-5372-4405-9a64-c1efac65c62f!
	
	
	==> storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] <==
	I0924 19:46:40.200996       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0924 19:47:10.203840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965745 -n no-preload-965745
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-965745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w7bfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj: exit status 1 (67.501173ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w7bfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-965745 describe pod metrics-server-6867b74b74-w7bfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (428.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:08:02.161158092 +0000 UTC m=+6491.263169799
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-093771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.583µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-093771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
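The failure above reduces to the dashboard addon never producing a running "k8s-app=kubernetes-dashboard" pod, so the follow-up image check on deploy/dashboard-metrics-scraper had nothing to inspect. A minimal manual sketch of the same two checks, reusing the profile context, namespace, label, and deployment name quoted in the messages above (the jsonpath expression for reading the container image is an assumption, not taken from the test code):

	# Pods the test waits up to 9m0s for (label taken from the log above):
	kubectl --context default-k8s-diff-port-093771 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Image the test then expects to contain "registry.k8s.io/echoserver:1.4":
	kubectl --context default-k8s-diff-port-093771 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'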
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-093771 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-093771 logs -n 25: (3.085108187s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC | 24 Sep 24 20:06 UTC |
	| start   | -p newest-cni-813973 --memory=2200 --alsologtostderr   | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC | 24 Sep 24 20:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	| delete  | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	| addons  | enable metrics-server -p newest-cni-813973             | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-813973                                   | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-813973                  | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-813973 --memory=2200 --alsologtostderr   | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 20:07:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 20:07:36.583885   77377 out.go:345] Setting OutFile to fd 1 ...
	I0924 20:07:36.584131   77377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:07:36.584140   77377 out.go:358] Setting ErrFile to fd 2...
	I0924 20:07:36.584144   77377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:07:36.584339   77377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 20:07:36.584838   77377 out.go:352] Setting JSON to false
	I0924 20:07:36.585712   77377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6608,"bootTime":1727201849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 20:07:36.585810   77377 start.go:139] virtualization: kvm guest
	I0924 20:07:36.588218   77377 out.go:177] * [newest-cni-813973] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 20:07:36.589804   77377 notify.go:220] Checking for updates...
	I0924 20:07:36.589879   77377 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 20:07:36.591317   77377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 20:07:36.592981   77377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 20:07:36.594663   77377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:07:36.596087   77377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 20:07:36.597648   77377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 20:07:36.599464   77377 config.go:182] Loaded profile config "newest-cni-813973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:07:36.599847   77377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 20:07:36.599918   77377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 20:07:36.614784   77377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0924 20:07:36.615237   77377 main.go:141] libmachine: () Calling .GetVersion
	I0924 20:07:36.615761   77377 main.go:141] libmachine: Using API Version  1
	I0924 20:07:36.615782   77377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 20:07:36.616164   77377 main.go:141] libmachine: () Calling .GetMachineName
	I0924 20:07:36.616398   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:36.616647   77377 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 20:07:36.616951   77377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 20:07:36.616983   77377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 20:07:36.631739   77377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I0924 20:07:36.632148   77377 main.go:141] libmachine: () Calling .GetVersion
	I0924 20:07:36.632587   77377 main.go:141] libmachine: Using API Version  1
	I0924 20:07:36.632611   77377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 20:07:36.632965   77377 main.go:141] libmachine: () Calling .GetMachineName
	I0924 20:07:36.633166   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:36.669157   77377 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 20:07:36.670400   77377 start.go:297] selected driver: kvm2
	I0924 20:07:36.670415   77377 start.go:901] validating driver "kvm2" against &{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:07:36.670539   77377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 20:07:36.671355   77377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:07:36.671439   77377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 20:07:36.686352   77377 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 20:07:36.686768   77377 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 20:07:36.686804   77377 cni.go:84] Creating CNI manager for ""
	I0924 20:07:36.686876   77377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:07:36.686933   77377 start.go:340] cluster config:
	{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:07:36.687029   77377 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:07:36.689588   77377 out.go:177] * Starting "newest-cni-813973" primary control-plane node in "newest-cni-813973" cluster
	I0924 20:07:36.690786   77377 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:07:36.690821   77377 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 20:07:36.690845   77377 cache.go:56] Caching tarball of preloaded images
	I0924 20:07:36.690927   77377 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 20:07:36.690941   77377 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 20:07:36.691054   77377 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:07:36.691220   77377 start.go:360] acquireMachinesLock for newest-cni-813973: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 20:07:36.691257   77377 start.go:364] duration metric: took 21.455µs to acquireMachinesLock for "newest-cni-813973"
	I0924 20:07:36.691269   77377 start.go:96] Skipping create...Using existing machine configuration
	I0924 20:07:36.691276   77377 fix.go:54] fixHost starting: 
	I0924 20:07:36.691507   77377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 20:07:36.691535   77377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 20:07:36.705768   77377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0924 20:07:36.706204   77377 main.go:141] libmachine: () Calling .GetVersion
	I0924 20:07:36.706654   77377 main.go:141] libmachine: Using API Version  1
	I0924 20:07:36.706675   77377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 20:07:36.707034   77377 main.go:141] libmachine: () Calling .GetMachineName
	I0924 20:07:36.707213   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:36.707383   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetState
	I0924 20:07:36.708852   77377 fix.go:112] recreateIfNeeded on newest-cni-813973: state=Stopped err=<nil>
	I0924 20:07:36.708871   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	W0924 20:07:36.709031   77377 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 20:07:36.711120   77377 out.go:177] * Restarting existing kvm2 VM for "newest-cni-813973" ...
	I0924 20:07:36.712383   77377 main.go:141] libmachine: (newest-cni-813973) Calling .Start
	I0924 20:07:36.712528   77377 main.go:141] libmachine: (newest-cni-813973) Ensuring networks are active...
	I0924 20:07:36.713207   77377 main.go:141] libmachine: (newest-cni-813973) Ensuring network default is active
	I0924 20:07:36.713669   77377 main.go:141] libmachine: (newest-cni-813973) Ensuring network mk-newest-cni-813973 is active
	I0924 20:07:36.714042   77377 main.go:141] libmachine: (newest-cni-813973) Getting domain xml...
	I0924 20:07:36.714761   77377 main.go:141] libmachine: (newest-cni-813973) Creating domain...
	I0924 20:07:37.925456   77377 main.go:141] libmachine: (newest-cni-813973) Waiting to get IP...
	I0924 20:07:37.926629   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:37.927062   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:37.927129   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:37.927042   77412 retry.go:31] will retry after 288.442272ms: waiting for machine to come up
	I0924 20:07:38.217696   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:38.218236   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:38.218268   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:38.218193   77412 retry.go:31] will retry after 301.230057ms: waiting for machine to come up
	I0924 20:07:38.520722   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:38.521165   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:38.521195   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:38.521127   77412 retry.go:31] will retry after 458.823259ms: waiting for machine to come up
	I0924 20:07:38.981953   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:38.982394   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:38.982422   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:38.982357   77412 retry.go:31] will retry after 448.674013ms: waiting for machine to come up
	I0924 20:07:39.433023   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:39.433398   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:39.433428   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:39.433356   77412 retry.go:31] will retry after 478.488538ms: waiting for machine to come up
	I0924 20:07:39.912905   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:39.913315   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:39.913340   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:39.913268   77412 retry.go:31] will retry after 642.067197ms: waiting for machine to come up
	I0924 20:07:40.557127   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:40.557632   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:40.557658   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:40.557587   77412 retry.go:31] will retry after 808.006395ms: waiting for machine to come up
	I0924 20:07:41.367233   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:41.367641   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:41.367669   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:41.367598   77412 retry.go:31] will retry after 1.435485013s: waiting for machine to come up
	I0924 20:07:42.804657   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:42.805086   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:42.805110   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:42.805060   77412 retry.go:31] will retry after 1.414169135s: waiting for machine to come up
	I0924 20:07:44.221668   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:44.222114   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:44.222142   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:44.222066   77412 retry.go:31] will retry after 1.923383965s: waiting for machine to come up
	I0924 20:07:46.148062   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:46.148520   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:46.148543   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:46.148470   77412 retry.go:31] will retry after 2.063576469s: waiting for machine to come up
	I0924 20:07:48.213944   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:48.214307   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:48.214336   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:48.214293   77412 retry.go:31] will retry after 2.723508137s: waiting for machine to come up
	I0924 20:07:50.939764   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:50.940303   77377 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:07:50.940325   77377 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:07:50.940249   77412 retry.go:31] will retry after 3.796337537s: waiting for machine to come up
	I0924 20:07:54.740472   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.740910   77377 main.go:141] libmachine: (newest-cni-813973) Found IP for machine: 192.168.72.187
	I0924 20:07:54.740933   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has current primary IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.740941   77377 main.go:141] libmachine: (newest-cni-813973) Reserving static IP address...
	I0924 20:07:54.741333   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "newest-cni-813973", mac: "52:54:00:ae:f7:44", ip: "192.168.72.187"} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:54.741358   77377 main.go:141] libmachine: (newest-cni-813973) Reserved static IP address: 192.168.72.187
	I0924 20:07:54.741377   77377 main.go:141] libmachine: (newest-cni-813973) DBG | skip adding static IP to network mk-newest-cni-813973 - found existing host DHCP lease matching {name: "newest-cni-813973", mac: "52:54:00:ae:f7:44", ip: "192.168.72.187"}
	I0924 20:07:54.741395   77377 main.go:141] libmachine: (newest-cni-813973) DBG | Getting to WaitForSSH function...
	I0924 20:07:54.741410   77377 main.go:141] libmachine: (newest-cni-813973) Waiting for SSH to be available...
	I0924 20:07:54.743650   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.743911   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:54.743937   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.744001   77377 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH client type: external
	I0924 20:07:54.744030   77377 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa (-rw-------)
	I0924 20:07:54.744064   77377 main.go:141] libmachine: (newest-cni-813973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 20:07:54.744081   77377 main.go:141] libmachine: (newest-cni-813973) DBG | About to run SSH command:
	I0924 20:07:54.744092   77377 main.go:141] libmachine: (newest-cni-813973) DBG | exit 0
	I0924 20:07:54.862258   77377 main.go:141] libmachine: (newest-cni-813973) DBG | SSH cmd err, output: <nil>: 
	I0924 20:07:54.862566   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:07:54.863194   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:54.865371   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.865835   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:54.865870   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.866114   77377 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:07:54.866285   77377 machine.go:93] provisionDockerMachine start ...
	I0924 20:07:54.866301   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:54.866484   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:54.868590   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.868927   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:54.868954   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.869089   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:54.869249   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:54.869402   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:54.869512   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:54.869671   77377 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:54.869861   77377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:54.869876   77377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 20:07:54.966347   77377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 20:07:54.966374   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:54.966599   77377 buildroot.go:166] provisioning hostname "newest-cni-813973"
	I0924 20:07:54.966635   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:54.966859   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:54.969443   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.969934   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:54.969988   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:54.970033   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:54.970184   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:54.970352   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:54.970471   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:54.970634   77377 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:54.970859   77377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:54.970877   77377 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-813973 && echo "newest-cni-813973" | sudo tee /etc/hostname
	I0924 20:07:55.079581   77377 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-813973
	
	I0924 20:07:55.079611   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.082950   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.083454   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.083484   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.083616   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.083787   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.083972   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.084161   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.084405   77377 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:55.084566   77377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:55.084582   77377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-813973' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-813973/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-813973' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 20:07:55.186805   77377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 20:07:55.186854   77377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 20:07:55.186877   77377 buildroot.go:174] setting up certificates
	I0924 20:07:55.186888   77377 provision.go:84] configureAuth start
	I0924 20:07:55.186900   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:55.187176   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:55.189609   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.190001   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.190031   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.190157   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.191976   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.192261   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.192288   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.192348   77377 provision.go:143] copyHostCerts
	I0924 20:07:55.192396   77377 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 20:07:55.192406   77377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 20:07:55.192470   77377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 20:07:55.192584   77377 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 20:07:55.192593   77377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 20:07:55.192622   77377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 20:07:55.192685   77377 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 20:07:55.192692   77377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 20:07:55.192713   77377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 20:07:55.192771   77377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.newest-cni-813973 san=[127.0.0.1 192.168.72.187 localhost minikube newest-cni-813973]
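The step above issues a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the node name, signed by the profile CA. A minimal, self-contained Go sketch of issuing such a SAN-bearing certificate (illustrative only, not minikube's provision code; the CA here is generated in memory rather than read from ~/.minikube/certs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throw-away CA (stand-in for ~/.minikube/certs/ca.pem + ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-813973"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-813973"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.187")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode, as it would be written to machines/server.pem.
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}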
	I0924 20:07:55.340401   77377 provision.go:177] copyRemoteCerts
	I0924 20:07:55.340467   77377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 20:07:55.340491   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.343322   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.343719   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.343746   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.343924   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.344132   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.344266   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.344392   77377 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:55.424020   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 20:07:55.448551   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 20:07:55.469755   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 20:07:55.490699   77377 provision.go:87] duration metric: took 303.798232ms to configureAuth
	I0924 20:07:55.490725   77377 buildroot.go:189] setting minikube options for container-runtime
	I0924 20:07:55.491013   77377 config.go:182] Loaded profile config "newest-cni-813973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:07:55.491150   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.493985   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.494324   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.494355   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.494566   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.494770   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.494969   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.495124   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.495261   77377 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:55.495406   77377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:55.495421   77377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 20:07:55.694477   77377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 20:07:55.694498   77377 machine.go:96] duration metric: took 828.202007ms to provisionDockerMachine
	I0924 20:07:55.694519   77377 start.go:293] postStartSetup for "newest-cni-813973" (driver="kvm2")
	I0924 20:07:55.694531   77377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 20:07:55.694553   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:55.694960   77377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 20:07:55.694990   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.697423   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.697687   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.697717   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.697860   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.698054   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.698176   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.698282   77377 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:55.776301   77377 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 20:07:55.780090   77377 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 20:07:55.780108   77377 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 20:07:55.780181   77377 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 20:07:55.780285   77377 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 20:07:55.780395   77377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 20:07:55.788970   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:07:55.810277   77377 start.go:296] duration metric: took 115.746258ms for postStartSetup
	I0924 20:07:55.810310   77377 fix.go:56] duration metric: took 19.119033878s for fixHost
	I0924 20:07:55.810330   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.812795   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.813158   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.813186   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.813352   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.813552   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.813700   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.813816   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.813960   77377 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:55.814139   77377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:55.814151   77377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 20:07:55.911185   77377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727208475.870130715
	
	I0924 20:07:55.911209   77377 fix.go:216] guest clock: 1727208475.870130715
	I0924 20:07:55.911220   77377 fix.go:229] Guest: 2024-09-24 20:07:55.870130715 +0000 UTC Remote: 2024-09-24 20:07:55.810313821 +0000 UTC m=+19.261289661 (delta=59.816894ms)
	I0924 20:07:55.911248   77377 fix.go:200] guest clock delta is within tolerance: 59.816894ms
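fix.go above compares the guest clock (read via `date +%s.%N` over SSH) with the host clock and only resyncs when the delta exceeds a tolerance; here the ~60ms delta is within bounds. A small illustrative sketch of that comparison (the one-second tolerance below is an assumption, not minikube's configured value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log above.
	guestRaw := "1727208475.870130715"
	secs, _ := strconv.ParseFloat(guestRaw, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	host := time.Now() // the "Remote:" timestamp in the log plays this role
	delta := guest.Sub(host)

	const tolerance = time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}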
	I0924 20:07:55.911256   77377 start.go:83] releasing machines lock for "newest-cni-813973", held for 19.219989678s
	I0924 20:07:55.911284   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:55.911554   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:55.913925   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.914248   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.914279   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.914432   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:55.914960   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:55.915121   77377 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:55.915201   77377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 20:07:55.915254   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.915372   77377 ssh_runner.go:195] Run: cat /version.json
	I0924 20:07:55.915396   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:55.917715   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.917776   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.918083   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.918108   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.918134   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:55.918155   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:55.918238   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.918348   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:55.918405   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.918511   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:55.918579   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.918676   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:55.918747   77377 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:55.918841   77377 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:56.009799   77377 ssh_runner.go:195] Run: systemctl --version
	I0924 20:07:56.015215   77377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 20:07:56.158313   77377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 20:07:56.164321   77377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 20:07:56.164381   77377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 20:07:56.178769   77377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
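The `find ... -exec mv {} {}.mk_disabled` step above sidelines any bridge or podman CNI configs so they cannot conflict with the CNI minikube is about to configure. An equivalent sketch in Go (path and suffix taken from the log; purely illustrative, not the code minikube runs):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Match *bridge* or *podman*, but skip files already sidelined.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join("/etc/cni/net.d", name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("disabled", src)
	}
}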
	I0924 20:07:56.178796   77377 start.go:495] detecting cgroup driver to use...
	I0924 20:07:56.178901   77377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 20:07:56.194333   77377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 20:07:56.207311   77377 docker.go:217] disabling cri-docker service (if available) ...
	I0924 20:07:56.207366   77377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 20:07:56.222479   77377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 20:07:56.235494   77377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 20:07:56.350862   77377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 20:07:56.509195   77377 docker.go:233] disabling docker service ...
	I0924 20:07:56.509265   77377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 20:07:56.522695   77377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 20:07:56.535904   77377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 20:07:56.653464   77377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 20:07:56.774936   77377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 20:07:56.789196   77377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 20:07:56.807324   77377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 20:07:56.807382   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.816483   77377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 20:07:56.816545   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.825727   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.834678   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.843868   77377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 20:07:56.852791   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.861881   77377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.876939   77377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:56.885883   77377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 20:07:56.894360   77377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 20:07:56.894401   77377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 20:07:56.906466   77377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
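Above, `sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not yet loaded, so the module is loaded with modprobe and IPv4 forwarding is switched on. A rough sketch of that probe-then-fallback sequence (illustrative; the real flow runs these over SSH on the guest and requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe: does the bridge netfilter sysctl exist yet?
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Not fatal: loading br_netfilter makes the sysctl appear.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
		}
	}
	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}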
	I0924 20:07:56.915145   77377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:07:57.031521   77377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 20:07:57.114520   77377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 20:07:57.114586   77377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 20:07:57.119637   77377 start.go:563] Will wait 60s for crictl version
	I0924 20:07:57.119693   77377 ssh_runner.go:195] Run: which crictl
	I0924 20:07:57.122936   77377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 20:07:57.153834   77377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 20:07:57.153908   77377 ssh_runner.go:195] Run: crio --version
	I0924 20:07:57.178928   77377 ssh_runner.go:195] Run: crio --version
	I0924 20:07:57.205290   77377 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 20:07:57.206912   77377 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:57.209707   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:57.210043   77377 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:07:46 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:57.210075   77377 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:57.210282   77377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 20:07:57.213754   77377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:07:57.226663   77377 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0924 20:07:57.227828   77377 kubeadm.go:883] updating cluster {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 20:07:57.227926   77377 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:07:57.227977   77377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:07:57.265056   77377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 20:07:57.265112   77377 ssh_runner.go:195] Run: which lz4
	I0924 20:07:57.268665   77377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 20:07:57.272336   77377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 20:07:57.272359   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 20:07:58.478784   77377 crio.go:462] duration metric: took 1.21014473s to copy over tarball
	I0924 20:07:58.478866   77377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 20:08:00.433628   77377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.954737125s)
	I0924 20:08:00.433657   77377 crio.go:469] duration metric: took 1.954845604s to extract the tarball
	I0924 20:08:00.433666   77377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 20:08:00.469184   77377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:08:00.508235   77377 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 20:08:00.508260   77377 cache_images.go:84] Images are preloaded, skipping loading
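The two `crictl images --output json` calls above bracket the preload: before the tarball is extracted the expected kube-apiserver image is absent, afterwards every image is present and loading is skipped. A compact sketch of that presence check (the JSON field names follow the CRI ListImages response; treat them as an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Minimal view of `crictl images --output json`; field names assumed from the CRI ListImages response.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.1" // the probe image from the log above
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("preloaded images present, skipping load")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}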
	I0924 20:08:00.508270   77377 kubeadm.go:934] updating node { 192.168.72.187 8443 v1.31.1 crio true true} ...
	I0924 20:08:00.508371   77377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-813973 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 20:08:00.508446   77377 ssh_runner.go:195] Run: crio config
	I0924 20:08:00.551960   77377 cni.go:84] Creating CNI manager for ""
	I0924 20:08:00.551987   77377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:08:00.551997   77377 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0924 20:08:00.552017   77377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.187 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-813973 NodeName:newest-cni-813973 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 20:08:00.552148   77377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-813973"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
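The kubeadm/kubelet/kube-proxy config above is generated from a handful of cluster parameters (advertise address, cluster name, pod and service CIDRs, CRI socket). A much-reduced sketch of that templating approach using text/template (the template below is hypothetical and far shorter than minikube's real one):

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress string
	ClusterName      string
	CRISocket        string
}

// Hypothetical, heavily trimmed template; the real one also emits
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.ClusterName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.72.187",
		ClusterName:      "newest-cni-813973",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}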
	
	I0924 20:08:00.552203   77377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 20:08:00.561515   77377 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 20:08:00.561567   77377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 20:08:00.570082   77377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0924 20:08:00.585298   77377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 20:08:00.599883   77377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0924 20:08:00.615296   77377 ssh_runner.go:195] Run: grep 192.168.72.187	control-plane.minikube.internal$ /etc/hosts
	I0924 20:08:00.618621   77377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:08:00.629126   77377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:08:00.747553   77377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 20:08:00.768933   77377 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973 for IP: 192.168.72.187
	I0924 20:08:00.768957   77377 certs.go:194] generating shared ca certs ...
	I0924 20:08:00.768978   77377 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:08:00.769158   77377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 20:08:00.769222   77377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 20:08:00.769236   77377 certs.go:256] generating profile certs ...
	I0924 20:08:00.769354   77377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key
	I0924 20:08:00.769445   77377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f
	I0924 20:08:00.769496   77377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key
	I0924 20:08:00.769668   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 20:08:00.769722   77377 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 20:08:00.769738   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 20:08:00.769771   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 20:08:00.769802   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 20:08:00.769829   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 20:08:00.769892   77377 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:08:00.770708   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 20:08:00.807577   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 20:08:00.835747   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 20:08:00.860197   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 20:08:00.901925   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 20:08:00.936846   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 20:08:00.958522   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 20:08:00.979183   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 20:08:00.999931   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 20:08:01.021175   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 20:08:01.041713   77377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 20:08:01.063665   77377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 20:08:01.078696   77377 ssh_runner.go:195] Run: openssl version
	I0924 20:08:01.083758   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 20:08:01.093209   77377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 20:08:01.096939   77377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 20:08:01.096979   77377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 20:08:01.102150   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 20:08:01.111937   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 20:08:01.121609   77377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:08:01.125717   77377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:08:01.125755   77377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:08:01.130882   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 20:08:01.140244   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 20:08:01.149583   77377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 20:08:01.153578   77377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 20:08:01.153623   77377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 20:08:01.158557   77377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
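The three `ln -fs` steps above install each CA under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL locates trusted certificates (e.g. minikubeCA.pem hashes to b5213941 in the log). A small sketch that derives the hash with `openssl x509 -hash` and creates the symlink (illustrative; it shells out to openssl rather than reimplementing the hash):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up CAs as /etc/ssl/certs/<hash>.<n>; using .0 assumes no hash collision.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(link, "->", certPath)
}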
	I0924 20:08:01.168368   77377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 20:08:01.172365   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 20:08:01.177581   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 20:08:01.182694   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 20:08:01.187877   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 20:08:01.193093   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 20:08:01.198048   77377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
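Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed in Go with crypto/x509 (illustrative sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Any of the certs checked above, e.g. apiserver-kubelet-client.crt.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "not PEM data")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of `openssl x509 -checkend 86400`: still valid one day from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would regenerate")
		return
	}
	fmt.Println("certificate valid for at least another 24h")
}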
	I0924 20:08:01.203207   77377 kubeadm.go:392] StartCluster: {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:08:01.203303   77377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 20:08:01.203368   77377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 20:08:01.237536   77377 cri.go:89] found id: ""
	I0924 20:08:01.237618   77377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 20:08:01.247300   77377 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 20:08:01.247320   77377 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 20:08:01.247367   77377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 20:08:01.256557   77377 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 20:08:01.257060   77377 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-813973" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 20:08:01.257287   77377 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-813973" cluster setting kubeconfig missing "newest-cni-813973" context setting]
	I0924 20:08:01.257688   77377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:08:01.258969   77377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 20:08:01.267448   77377 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.187
	I0924 20:08:01.267474   77377 kubeadm.go:1160] stopping kube-system containers ...
	I0924 20:08:01.267491   77377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 20:08:01.267539   77377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 20:08:01.299858   77377 cri.go:89] found id: ""
	I0924 20:08:01.299919   77377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 20:08:01.315009   77377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 20:08:01.323662   77377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 20:08:01.323677   77377 kubeadm.go:157] found existing configuration files:
	
	I0924 20:08:01.323711   77377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 20:08:01.331693   77377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 20:08:01.331734   77377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 20:08:01.339899   77377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 20:08:01.348308   77377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 20:08:01.348360   77377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 20:08:01.356512   77377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 20:08:01.364564   77377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 20:08:01.364617   77377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 20:08:01.372788   77377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 20:08:01.380639   77377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 20:08:01.380690   77377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 20:08:01.388977   77377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 20:08:01.397278   77377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 20:08:01.493638   77377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
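	
	Note on the trace above: this is minikube's restartPrimaryControlPlane path. It detects existing configuration files, finds that the stale-config check fails (none of the /etc/kubernetes/*.conf files exist, hence the "Process exited with status 2" entries), removes any leftovers, installs the freshly rendered kubeadm.yaml, and re-runs the kubeadm "certs" and "kubeconfig" init phases. As a hedged sketch only (assuming shell access to the node, e.g. via minikube ssh for this profile), the same sequence the ssh_runner entries record is roughly:
	
	  # check for stale kubeconfig-style files (status 2 above means none were present)
	  sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	  # drop any file that does not reference the expected control-plane endpoint
	  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf
	  # install the new kubeadm config and regenerate certs and kubeconfigs
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml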
	
	
	==> CRI-O <==
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.765980383Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:591605b2-de7e-4dc1-903b-f8102ccc3770,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207503240302026,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-24T19:51:42.933020556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07cff32feab4f2243f976128b4b7b1bf617d32e00abeb793ec0d9a487c0d8fc1,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gnlkd,Uid:a3b6c4f7-47e1-48a3-adff-1690db5cea3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207503184828639,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gnlkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b6c4f7-47e1-48a3-adff-1
690db5cea3b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T19:51:42.878966568Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzssp,Uid:ecf276cd-9aa0-4a0b-81b6-da38271d10ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207502034185199,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T19:51:41.726429706Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-87t62,Uid:b4be73eb
-defb-4cc1-84f7-d34dccab4a2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207502009145330,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T19:51:41.700907274Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&PodSandboxMetadata{Name:kube-proxy-5rw7b,Uid:f2916b6c-1a6f-4766-8543-0d846f559710,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207501920037322,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T19:51:41.613028013Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-093771,Uid:a17f7297d4f93984fc9ad306bb059326,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727207491224017327,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.116:8444,kubernetes.io/config.hash: a17f7297d4f93984fc9ad306bb059326,kubernetes.io/config.seen: 2024-09-24T19:51:30.752523634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-093771,Uid:8463196c29ee74ccc6f7e94a4077ef38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207491218766006,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8463196c29ee74ccc6f7e94a4077ef38,kubernetes.io/config.seen: 2024-09-24T19:51:30.752525036Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-093771,Uid:fe62de2daaf8dcc4fd39e199dadfa7cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedA
t:1727207491210248449,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe62de2daaf8dcc4fd39e199dadfa7cd,kubernetes.io/config.seen: 2024-09-24T19:51:30.752526055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-093771,Uid:1e1955887103edb8159ea2696a6d8e57,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727207491203768784,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.116:2379,kubernetes.io/config.hash: 1e1955887103edb8159ea2696a6d8e57,kubernetes.io/config.seen: 2024-09-24T19:51:30.752497130Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-093771,Uid:a17f7297d4f93984fc9ad306bb059326,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727207203794367903,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.116:8444,kubernetes.io/config.hash: a17f7297d4f93984fc9ad306bb059326,kubernetes.io/config.s
een: 2024-09-24T19:46:43.307543851Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4aad2173-dc1b-45a5-ba70-f1b480c8bba9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.766558886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12a65753-2668-4ccf-9104-f48201b25e74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.766660152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12a65753-2668-4ccf-9104-f48201b25e74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.766867622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12a65753-2668-4ccf-9104-f48201b25e74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.783526238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89bffa2f-90f5-472a-940f-fd13fa5832ee name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.783659153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89bffa2f-90f5-472a-940f-fd13fa5832ee name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.784500959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa77412-66dd-4b51-953f-c6b2235c43a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.784917182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208482784897693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa77412-66dd-4b51-953f-c6b2235c43a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.785431911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42e2b918-7f98-4027-88fb-d0f8d62b56cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.785479028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42e2b918-7f98-4027-88fb-d0f8d62b56cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.785726963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42e2b918-7f98-4027-88fb-d0f8d62b56cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.824256935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c276a321-799c-4422-a57b-d984e05f324f name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.827809839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c276a321-799c-4422-a57b-d984e05f324f name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.829338352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc0f0f44-dbb2-44bc-9101-271d62db20e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.830363862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208482830330655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc0f0f44-dbb2-44bc-9101-271d62db20e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.831066752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99aa6e9b-40b8-484a-9bf4-29644bb1403d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.831204839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99aa6e9b-40b8-484a-9bf4-29644bb1403d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.831772938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99aa6e9b-40b8-484a-9bf4-29644bb1403d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.863013097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3d9688c-455f-46b0-b34d-fd7c3f2d580c name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.863088404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3d9688c-455f-46b0-b34d-fd7c3f2d580c name=/runtime.v1.RuntimeService/Version
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.863980688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fb990a5-c55e-45d1-9264-6244d12440f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.864404208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208482864380668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fb990a5-c55e-45d1-9264-6244d12440f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.864851960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b70cbe7-dba2-455a-899d-50176a56e797 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.864905091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b70cbe7-dba2-455a-899d-50176a56e797 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:08:02 default-k8s-diff-port-093771 crio[705]: time="2024-09-24 20:08:02.865227417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d,PodSandboxId:cf49c730126f643a8a6dd5613a5cca00ca5451e8cb1349320266552693f434fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207503369894604,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 591605b2-de7e-4dc1-903b-f8102ccc3770,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d,PodSandboxId:fac66c2115436eca468dfe2df0b7552e909963fb2629c9b05328fa60c1eb1429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502851008630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzssp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecf276cd-9aa0-4a0b-81b6-da38271d10ed,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7,PodSandboxId:189511ecad721347bcdd29ba2d05a1752862cdbb96a72cf5123d00b4f409a06e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207502729510259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-87t62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b4be73eb-defb-4cc1-84f7-d34dccab4a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4,PodSandboxId:5ffc9af5a09ca02df9e225b28402cd2836c732b719c53140d257d07370e00499,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727207502133542728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rw7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2916b6c-1a6f-4766-8543-0d846f559710,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002,PodSandboxId:e8cac2baea09058ca0128707233d093926e9c131364d612ce42be4c8ad76189a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172720749146115744
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe62de2daaf8dcc4fd39e199dadfa7cd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa,PodSandboxId:f22a07a10b746d1bb97d6836279f30266bdd6ad8d9fa270d11410225ea015ac3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17272074914
60897259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8463196c29ee74ccc6f7e94a4077ef38,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f,PodSandboxId:d747699bf5a8abc81e9e969157f2e97051080b1170dd4664427ab5f86497008f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17272
07491384520046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e1955887103edb8159ea2696a6d8e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3,PodSandboxId:e3abe44660030f1106549bdad21b9b6c675e95b70013c8621e653c1b2d805397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207491339991882,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d,PodSandboxId:a4917d1a9dab242c5a1b0f0dd14e1cc9750e3564aafc47b20188433d616cb9e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207205036856114,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-093771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17f7297d4f93984fc9ad306bb059326,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b70cbe7-dba2-455a-899d-50176a56e797 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b6f65eec9f0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   cf49c730126f6       storage-provisioner
	d05c709fa2730       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   fac66c2115436       coredns-7c65d6cfc9-nzssp
	3cb4369fc1e40       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   189511ecad721       coredns-7c65d6cfc9-87t62
	d9c77eb695dfe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   5ffc9af5a09ca       kube-proxy-5rw7b
	32ab49acc4ac7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   e8cac2baea090       kube-scheduler-default-k8s-diff-port-093771
	3e6ac8738592c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   f22a07a10b746       kube-controller-manager-default-k8s-diff-port-093771
	ed1c1d2106c8b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   d747699bf5a8a       etcd-default-k8s-diff-port-093771
	ac621738ad1f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   e3abe44660030       kube-apiserver-default-k8s-diff-port-093771
	58152b2400355       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   a4917d1a9dab2       kube-apiserver-default-k8s-diff-port-093771
	
	
	==> coredns [3cb4369fc1e40810036541e157b9cf7ae4c35088c5d16d996c74baff6dc4bfd7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d05c709fa2730944d3173932bdd1af233ff8b990def81020eb63ee86fe32998d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-093771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-093771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=default-k8s-diff-port-093771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:51:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-093771
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:07:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 20:07:05 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 20:07:05 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 20:07:05 +0000   Tue, 24 Sep 2024 19:51:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 20:07:05 +0000   Tue, 24 Sep 2024 19:51:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    default-k8s-diff-port-093771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44d371c36b3f412b9fb6d4d146e398ef
	  System UUID:                44d371c3-6b3f-412b-9fb6-d4d146e398ef
	  Boot ID:                    f9efba96-f43f-40dd-8bcf-03c6890f483b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-87t62                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-nzssp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-093771                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-093771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-093771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5rw7b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-093771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-gnlkd                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-093771 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-093771 event: Registered Node default-k8s-diff-port-093771 in Controller
	
	
	==> dmesg <==
	[  +0.047752] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.784276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853459] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543282] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.500511] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.066876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071532] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.209958] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.147921] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.310626] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.142292] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.913779] systemd-fstab-generator[905]: Ignoring "noauto" option for root device
	[  +0.058810] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.507566] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.178283] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 19:51] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.064068] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +4.688802] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.356125] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.365379] systemd-fstab-generator[3011]: Ignoring "noauto" option for root device
	[  +0.116364] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.094216] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ed1c1d2106c8b28f603f7566861a48a593e9d1ae6c35e0bf44e73e504b1bf94f] <==
	{"level":"info","ts":"2024-09-24T19:51:32.599990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgPreVoteResp from 70e810c2542c58a7 at term 1"}
	{"level":"info","ts":"2024-09-24T19:51:32.600003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgVoteResp from 70e810c2542c58a7 at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.600023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 70e810c2542c58a7 elected leader 70e810c2542c58a7 at term 2"}
	{"level":"info","ts":"2024-09-24T19:51:32.601377Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.602241Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"70e810c2542c58a7","local-member-attributes":"{Name:default-k8s-diff-port-093771 ClientURLs:[https://192.168.50.116:2379]}","request-path":"/0/members/70e810c2542c58a7/attributes","cluster-id":"938c7bbb9c530c74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:51:32.602278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:51:32.602642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:51:32.603256Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:51:32.604025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:51:32.604201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:51:32.604226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:51:32.604704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:51:32.605372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.116:2379"}
	{"level":"info","ts":"2024-09-24T19:51:32.605676Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"938c7bbb9c530c74","local-member-id":"70e810c2542c58a7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.605747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:51:32.605778Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T20:01:32.638046Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-09-24T20:01:32.645886Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"7.572729ms","hash":3868663388,"current-db-size-bytes":2199552,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2199552,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-24T20:01:32.645936Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3868663388,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T20:06:32.651990Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":928}
	{"level":"info","ts":"2024-09-24T20:06:32.655322Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":928,"took":"2.981482ms","hash":4010575539,"current-db-size-bytes":2199552,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-24T20:06:32.655402Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4010575539,"revision":928,"compact-revision":685}
	{"level":"info","ts":"2024-09-24T20:08:04.142427Z","caller":"traceutil/trace.go:171","msg":"trace[605504168] transaction","detail":"{read_only:false; response_revision:1247; number_of_response:1; }","duration":"119.630963ms","start":"2024-09-24T20:08:04.022772Z","end":"2024-09-24T20:08:04.142403Z","steps":["trace[605504168] 'process raft request'  (duration: 119.431303ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:08:05 up 21 min,  0 users,  load average: 0.02, 0.09, 0.09
	Linux default-k8s-diff-port-093771 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [58152b24003559a219bc8b89415a2309d822726f957c2977aedfad7d8aea0c8d] <==
	W0924 19:51:25.138321       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.191709       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.202230       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.277208       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.287975       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.307502       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.400804       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.404431       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.425166       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.453178       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.457541       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.486388       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.495454       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.505135       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.508746       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.509012       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.574106       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.643103       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.748840       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.794270       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.804044       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.843666       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:25.964297       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:26.002156       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:26.045876       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ac621738ad1f0426836abac76c909ee1f89612ef5da06efddefce95137669ac3] <==
	I0924 20:04:34.866563       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:04:34.866753       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:06:33.864124       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:33.864225       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 20:06:34.866308       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:34.866636       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 20:06:34.866522       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:06:34.866851       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:06:34.867884       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:06:34.867900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:07:34.868891       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 20:07:34.868920       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:07:34.869081       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 20:07:34.869149       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:07:34.870261       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:07:34.870316       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e6ac8738592cf78c03eaa7ce93a5be3ee513801aef2e0e2ac506e5ec35e0faa] <==
	I0924 20:02:41.374422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:02:42.454286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.728µs"
	I0924 20:02:54.456835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="79.469µs"
	E0924 20:03:10.809833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:11.381124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:03:40.815716       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:41.389105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:10.821892       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:11.395478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:40.827821       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:41.403038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:10.833388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:11.409482       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:40.839421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:41.416660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:10.845841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:11.424302       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:40.852248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:41.433443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:07:05.741611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-093771"
	E0924 20:07:10.858276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:07:11.441084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:07:40.864339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:07:41.453729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:07:53.451249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="101.869µs"
	
	
	==> kube-proxy [d9c77eb695dfea4ab3ef6fc3c580b15f8514469dfb763e71312cd0d8af5220b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:51:42.593855       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:51:42.636427       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
	E0924 19:51:42.636510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:51:42.991513       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:51:42.991628       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:51:42.991659       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:51:43.083795       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:51:43.084099       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:51:43.084129       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:51:43.103202       1 config.go:199] "Starting service config controller"
	I0924 19:51:43.111795       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:51:43.109690       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:51:43.111904       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:51:43.110215       1 config.go:328] "Starting node config controller"
	I0924 19:51:43.111937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:51:43.211946       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:51:43.212019       1 shared_informer.go:320] Caches are synced for node config
	I0924 19:51:43.212030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32ab49acc4ac79d8659cb62284f7467547eb4df2913391aed631e3f188dcc002] <==
	W0924 19:51:33.944440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:51:33.944800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:33.944479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:51:33.944857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.796264       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:51:34.796740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.803847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:51:34.803942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.816917       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:51:34.817153       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:51:34.826844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 19:51:34.826922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.884961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 19:51:34.885026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.900565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:51:34.900740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:34.999001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:51:34.999265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.049340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 19:51:35.049536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.063557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 19:51:35.063742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:51:35.086541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 19:51:35.086795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 19:51:37.912977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 20:07:02 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:02.438295    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:07:06 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:06.670453    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208426670205133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:06 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:06.670767    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208426670205133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:15 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:15.438406    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:07:16 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:16.674206    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208436673808182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:16 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:16.674248    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208436673808182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:26 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:26.675902    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208446675499569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:26 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:26.676329    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208446675499569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:29 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:29.438727    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:36.451565    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:36.677661    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208456677346755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:36 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:36.677685    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208456677346755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:41 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:41.455353    2905 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 24 20:07:41 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:41.455413    2905 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 24 20:07:41 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:41.455536    2905 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhvdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-gnlkd_kube-system(a3b6c4f7-47e1-48a3-adff-1690db5cea3b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 24 20:07:41 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:41.456928    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:07:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:46.679651    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208466679134910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:46 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:46.680866    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208466679134910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:53 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:53.437991    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gnlkd" podUID="a3b6c4f7-47e1-48a3-adff-1690db5cea3b"
	Sep 24 20:07:56 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:56.683291    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208476682947819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:56 default-k8s-diff-port-093771 kubelet[2905]: E0924 20:07:56.683345    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208476682947819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1b6f65eec9f0c856f644d68a54155ea53d7ba0b3c434a007ea245837106df31d] <==
	I0924 19:51:43.455202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:51:43.467011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:51:43.467057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:51:43.477854       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:51:43.477967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56!
	I0924 19:51:43.481940       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa2debb-26a6-4ab0-9784-2c276ac06b32", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56 became leader
	I0924 19:51:43.578958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-093771_5ac3a173-2daf-4b53-9ddc-5e9a1d5f3f56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gnlkd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd: exit status 1 (77.366079ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gnlkd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-093771 describe pod metrics-server-6867b74b74-gnlkd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (428.59s)
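
The post-mortem sequence above reduces to two kubectl invocations: list every pod whose phase is not Running via a field selector, then describe each of those pods (which can come back NotFound if the pod has already been deleted, as it did here). Below is a minimal, hypothetical Go sketch of that flow, shelling out to kubectl the way the helper output above shows; the context name is taken from the log, error handling is simplified, and the describe call deliberately omits a namespace to mirror the command shown above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Context/profile name taken from the failure above.
		kubeContext := "default-k8s-diff-port-093771"

		// Equivalent of: kubectl get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "po", "-A",
			"--field-selector=status.phase!=Running",
			"-o=jsonpath={.items[*].metadata.name}").CombinedOutput()
		if err != nil {
			fmt.Printf("listing non-running pods failed: %v\n%s\n", err, out)
			return
		}

		for _, pod := range strings.Fields(string(out)) {
			// Describe each non-running pod; this can fail with NotFound if the pod is already gone.
			desc, _ := exec.Command("kubectl", "--context", kubeContext, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("%s\n", desc)
		}
	}
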

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-311319 -n embed-certs-311319
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 20:07:24.259588672 +0000 UTC m=+6453.361600377
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-311319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-311319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.917µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-311319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
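
The assertion above checks that the dashboard-metrics-scraper deployment picked up the image override from the "addons enable dashboard -p embed-certs-311319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4" command visible in the Audit table further down. The following is a rough, hypothetical sketch of that kind of check, not the helper's actual code: it reads the deployment's container images with a jsonpath query instead of parsing "kubectl describe" output; context, namespace, and deployment names are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const (
			kubeContext = "embed-certs-311319"             // profile/context from the log above
			expected    = "registry.k8s.io/echoserver:1.4" // image set via --images=MetricsScraper=...
		)

		// Read the images used by the scraper deployment (the jsonpath form is an assumption;
		// the real helper inspects "kubectl describe deploy" output instead).
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", "kubernetes-dashboard",
			"get", "deploy", "dashboard-metrics-scraper",
			"-o=jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
		if err != nil {
			fmt.Printf("failed to read deployment images: %v\n%s\n", err, out)
			return
		}

		if strings.Contains(string(out), expected) {
			fmt.Println("addon loaded the expected image:", expected)
		} else {
			fmt.Printf("addon did not load correct image: got %q, want substring %q\n", out, expected)
		}
	}
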
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
E0924 20:07:24.266203   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-311319 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-311319 logs -n 25: (1.152836167s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC | 24 Sep 24 20:06 UTC |
	| start   | -p newest-cni-813973 --memory=2200 --alsologtostderr   | newest-cni-813973            | jenkins | v1.34.0 | 24 Sep 24 20:06 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 20:07 UTC | 24 Sep 24 20:07 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 20:06:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 20:06:38.553344   76425 out.go:345] Setting OutFile to fd 1 ...
	I0924 20:06:38.553575   76425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:06:38.553583   76425 out.go:358] Setting ErrFile to fd 2...
	I0924 20:06:38.553588   76425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 20:06:38.553810   76425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 20:06:38.554450   76425 out.go:352] Setting JSON to false
	I0924 20:06:38.555626   76425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6550,"bootTime":1727201849,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 20:06:38.555737   76425 start.go:139] virtualization: kvm guest
	I0924 20:06:38.558092   76425 out.go:177] * [newest-cni-813973] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 20:06:38.559423   76425 notify.go:220] Checking for updates...
	I0924 20:06:38.559429   76425 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 20:06:38.561114   76425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 20:06:38.562298   76425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 20:06:38.563597   76425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:38.565041   76425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 20:06:38.566514   76425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 20:06:38.568194   76425 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568310   76425 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568413   76425 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:06:38.568549   76425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 20:06:38.605849   76425 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 20:06:38.607105   76425 start.go:297] selected driver: kvm2
	I0924 20:06:38.607130   76425 start.go:901] validating driver "kvm2" against <nil>
	I0924 20:06:38.607144   76425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 20:06:38.607905   76425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:06:38.607983   76425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 20:06:38.623524   76425 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 20:06:38.623575   76425 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0924 20:06:38.623624   76425 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0924 20:06:38.623886   76425 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 20:06:38.623917   76425 cni.go:84] Creating CNI manager for ""
	I0924 20:06:38.623959   76425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:06:38.623968   76425 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 20:06:38.624011   76425 start.go:340] cluster config:
	{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:06:38.624096   76425 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 20:06:38.626062   76425 out.go:177] * Starting "newest-cni-813973" primary control-plane node in "newest-cni-813973" cluster
	I0924 20:06:38.627351   76425 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:06:38.627388   76425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 20:06:38.627395   76425 cache.go:56] Caching tarball of preloaded images
	I0924 20:06:38.627446   76425 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 20:06:38.627456   76425 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 20:06:38.627534   76425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:06:38.627551   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json: {Name:mkb1196762f4c9aa9a83bb92eee1f51551659007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:06:38.627672   76425 start.go:360] acquireMachinesLock for newest-cni-813973: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 20:06:38.627698   76425 start.go:364] duration metric: took 14.172µs to acquireMachinesLock for "newest-cni-813973"
	I0924 20:06:38.627713   76425 start.go:93] Provisioning new machine with config: &{Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 20:06:38.627768   76425 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 20:06:38.629371   76425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 20:06:38.629509   76425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 20:06:38.629546   76425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 20:06:38.645119   76425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0924 20:06:38.645563   76425 main.go:141] libmachine: () Calling .GetVersion
	I0924 20:06:38.646112   76425 main.go:141] libmachine: Using API Version  1
	I0924 20:06:38.646132   76425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 20:06:38.646450   76425 main.go:141] libmachine: () Calling .GetMachineName
	I0924 20:06:38.646660   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:06:38.646795   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:06:38.647004   76425 start.go:159] libmachine.API.Create for "newest-cni-813973" (driver="kvm2")
	I0924 20:06:38.647026   76425 client.go:168] LocalClient.Create starting
	I0924 20:06:38.647051   76425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem
	I0924 20:06:38.647079   76425 main.go:141] libmachine: Decoding PEM data...
	I0924 20:06:38.647091   76425 main.go:141] libmachine: Parsing certificate...
	I0924 20:06:38.647131   76425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem
	I0924 20:06:38.647150   76425 main.go:141] libmachine: Decoding PEM data...
	I0924 20:06:38.647182   76425 main.go:141] libmachine: Parsing certificate...
	I0924 20:06:38.647199   76425 main.go:141] libmachine: Running pre-create checks...
	I0924 20:06:38.647207   76425 main.go:141] libmachine: (newest-cni-813973) Calling .PreCreateCheck
	I0924 20:06:38.647534   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:06:38.647945   76425 main.go:141] libmachine: Creating machine...
	I0924 20:06:38.647963   76425 main.go:141] libmachine: (newest-cni-813973) Calling .Create
	I0924 20:06:38.648083   76425 main.go:141] libmachine: (newest-cni-813973) Creating KVM machine...
	I0924 20:06:38.649249   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found existing default KVM network
	I0924 20:06:38.650353   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.650202   76448 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:ff:14} reservation:<nil>}
	I0924 20:06:38.651220   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.651154   76448 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:59:7d} reservation:<nil>}
	I0924 20:06:38.652002   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.651905   76448 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:32:b9} reservation:<nil>}
	I0924 20:06:38.653008   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.652952   76448 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b890}
	I0924 20:06:38.653089   76425 main.go:141] libmachine: (newest-cni-813973) DBG | created network xml: 
	I0924 20:06:38.653110   76425 main.go:141] libmachine: (newest-cni-813973) DBG | <network>
	I0924 20:06:38.653121   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <name>mk-newest-cni-813973</name>
	I0924 20:06:38.653131   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <dns enable='no'/>
	I0924 20:06:38.653139   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   
	I0924 20:06:38.653149   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0924 20:06:38.653156   76425 main.go:141] libmachine: (newest-cni-813973) DBG |     <dhcp>
	I0924 20:06:38.653164   76425 main.go:141] libmachine: (newest-cni-813973) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0924 20:06:38.653169   76425 main.go:141] libmachine: (newest-cni-813973) DBG |     </dhcp>
	I0924 20:06:38.653173   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   </ip>
	I0924 20:06:38.653179   76425 main.go:141] libmachine: (newest-cni-813973) DBG |   
	I0924 20:06:38.653183   76425 main.go:141] libmachine: (newest-cni-813973) DBG | </network>
	I0924 20:06:38.653203   76425 main.go:141] libmachine: (newest-cni-813973) DBG | 
	I0924 20:06:38.658465   76425 main.go:141] libmachine: (newest-cni-813973) DBG | trying to create private KVM network mk-newest-cni-813973 192.168.72.0/24...
	I0924 20:06:38.729165   76425 main.go:141] libmachine: (newest-cni-813973) DBG | private KVM network mk-newest-cni-813973 192.168.72.0/24 created
	I0924 20:06:38.729252   76425 main.go:141] libmachine: (newest-cni-813973) Setting up store path in /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 ...
	I0924 20:06:38.729276   76425 main.go:141] libmachine: (newest-cni-813973) Building disk image from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 20:06:38.729290   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.729216   76448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:38.729426   76425 main.go:141] libmachine: (newest-cni-813973) Downloading /home/jenkins/minikube-integration/19700-3751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 20:06:38.981174   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:38.981033   76448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa...
	I0924 20:06:39.153392   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:39.153281   76448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/newest-cni-813973.rawdisk...
	I0924 20:06:39.153423   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Writing magic tar header
	I0924 20:06:39.153439   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Writing SSH key tar header
	I0924 20:06:39.153450   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:39.153400   76448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 ...
	I0924 20:06:39.153512   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973
	I0924 20:06:39.153543   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube/machines
	I0924 20:06:39.153558   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 20:06:39.153616   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19700-3751
	I0924 20:06:39.153629   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 20:06:39.153643   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973 (perms=drwx------)
	I0924 20:06:39.153655   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home/jenkins
	I0924 20:06:39.153667   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube/machines (perms=drwxr-xr-x)
	I0924 20:06:39.153684   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751/.minikube (perms=drwxr-xr-x)
	I0924 20:06:39.153698   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration/19700-3751 (perms=drwxrwxr-x)
	I0924 20:06:39.153738   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Checking permissions on dir: /home
	I0924 20:06:39.153770   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 20:06:39.153784   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Skipping /home - not owner
	I0924 20:06:39.153804   76425 main.go:141] libmachine: (newest-cni-813973) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 20:06:39.153812   76425 main.go:141] libmachine: (newest-cni-813973) Creating domain...
	I0924 20:06:39.154787   76425 main.go:141] libmachine: (newest-cni-813973) define libvirt domain using xml: 
	I0924 20:06:39.154804   76425 main.go:141] libmachine: (newest-cni-813973) <domain type='kvm'>
	I0924 20:06:39.154815   76425 main.go:141] libmachine: (newest-cni-813973)   <name>newest-cni-813973</name>
	I0924 20:06:39.154821   76425 main.go:141] libmachine: (newest-cni-813973)   <memory unit='MiB'>2200</memory>
	I0924 20:06:39.154854   76425 main.go:141] libmachine: (newest-cni-813973)   <vcpu>2</vcpu>
	I0924 20:06:39.154869   76425 main.go:141] libmachine: (newest-cni-813973)   <features>
	I0924 20:06:39.154882   76425 main.go:141] libmachine: (newest-cni-813973)     <acpi/>
	I0924 20:06:39.154892   76425 main.go:141] libmachine: (newest-cni-813973)     <apic/>
	I0924 20:06:39.154919   76425 main.go:141] libmachine: (newest-cni-813973)     <pae/>
	I0924 20:06:39.154943   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.154954   76425 main.go:141] libmachine: (newest-cni-813973)   </features>
	I0924 20:06:39.154967   76425 main.go:141] libmachine: (newest-cni-813973)   <cpu mode='host-passthrough'>
	I0924 20:06:39.155002   76425 main.go:141] libmachine: (newest-cni-813973)   
	I0924 20:06:39.155029   76425 main.go:141] libmachine: (newest-cni-813973)   </cpu>
	I0924 20:06:39.155039   76425 main.go:141] libmachine: (newest-cni-813973)   <os>
	I0924 20:06:39.155046   76425 main.go:141] libmachine: (newest-cni-813973)     <type>hvm</type>
	I0924 20:06:39.155055   76425 main.go:141] libmachine: (newest-cni-813973)     <boot dev='cdrom'/>
	I0924 20:06:39.155070   76425 main.go:141] libmachine: (newest-cni-813973)     <boot dev='hd'/>
	I0924 20:06:39.155083   76425 main.go:141] libmachine: (newest-cni-813973)     <bootmenu enable='no'/>
	I0924 20:06:39.155092   76425 main.go:141] libmachine: (newest-cni-813973)   </os>
	I0924 20:06:39.155100   76425 main.go:141] libmachine: (newest-cni-813973)   <devices>
	I0924 20:06:39.155110   76425 main.go:141] libmachine: (newest-cni-813973)     <disk type='file' device='cdrom'>
	I0924 20:06:39.155124   76425 main.go:141] libmachine: (newest-cni-813973)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/boot2docker.iso'/>
	I0924 20:06:39.155135   76425 main.go:141] libmachine: (newest-cni-813973)       <target dev='hdc' bus='scsi'/>
	I0924 20:06:39.155144   76425 main.go:141] libmachine: (newest-cni-813973)       <readonly/>
	I0924 20:06:39.155157   76425 main.go:141] libmachine: (newest-cni-813973)     </disk>
	I0924 20:06:39.155168   76425 main.go:141] libmachine: (newest-cni-813973)     <disk type='file' device='disk'>
	I0924 20:06:39.155180   76425 main.go:141] libmachine: (newest-cni-813973)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 20:06:39.155197   76425 main.go:141] libmachine: (newest-cni-813973)       <source file='/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/newest-cni-813973.rawdisk'/>
	I0924 20:06:39.155208   76425 main.go:141] libmachine: (newest-cni-813973)       <target dev='hda' bus='virtio'/>
	I0924 20:06:39.155219   76425 main.go:141] libmachine: (newest-cni-813973)     </disk>
	I0924 20:06:39.155233   76425 main.go:141] libmachine: (newest-cni-813973)     <interface type='network'>
	I0924 20:06:39.155245   76425 main.go:141] libmachine: (newest-cni-813973)       <source network='mk-newest-cni-813973'/>
	I0924 20:06:39.155258   76425 main.go:141] libmachine: (newest-cni-813973)       <model type='virtio'/>
	I0924 20:06:39.155270   76425 main.go:141] libmachine: (newest-cni-813973)     </interface>
	I0924 20:06:39.155279   76425 main.go:141] libmachine: (newest-cni-813973)     <interface type='network'>
	I0924 20:06:39.155287   76425 main.go:141] libmachine: (newest-cni-813973)       <source network='default'/>
	I0924 20:06:39.155297   76425 main.go:141] libmachine: (newest-cni-813973)       <model type='virtio'/>
	I0924 20:06:39.155306   76425 main.go:141] libmachine: (newest-cni-813973)     </interface>
	I0924 20:06:39.155316   76425 main.go:141] libmachine: (newest-cni-813973)     <serial type='pty'>
	I0924 20:06:39.155324   76425 main.go:141] libmachine: (newest-cni-813973)       <target port='0'/>
	I0924 20:06:39.155333   76425 main.go:141] libmachine: (newest-cni-813973)     </serial>
	I0924 20:06:39.155342   76425 main.go:141] libmachine: (newest-cni-813973)     <console type='pty'>
	I0924 20:06:39.155353   76425 main.go:141] libmachine: (newest-cni-813973)       <target type='serial' port='0'/>
	I0924 20:06:39.155363   76425 main.go:141] libmachine: (newest-cni-813973)     </console>
	I0924 20:06:39.155377   76425 main.go:141] libmachine: (newest-cni-813973)     <rng model='virtio'>
	I0924 20:06:39.155394   76425 main.go:141] libmachine: (newest-cni-813973)       <backend model='random'>/dev/random</backend>
	I0924 20:06:39.155404   76425 main.go:141] libmachine: (newest-cni-813973)     </rng>
	I0924 20:06:39.155411   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.155420   76425 main.go:141] libmachine: (newest-cni-813973)     
	I0924 20:06:39.155427   76425 main.go:141] libmachine: (newest-cni-813973)   </devices>
	I0924 20:06:39.155439   76425 main.go:141] libmachine: (newest-cni-813973) </domain>
	I0924 20:06:39.155470   76425 main.go:141] libmachine: (newest-cni-813973) 
	I0924 20:06:39.159726   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:b3:53:a8 in network default
	I0924 20:06:39.160410   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring networks are active...
	I0924 20:06:39.160427   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:39.161251   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring network default is active
	I0924 20:06:39.161599   76425 main.go:141] libmachine: (newest-cni-813973) Ensuring network mk-newest-cni-813973 is active
	I0924 20:06:39.162092   76425 main.go:141] libmachine: (newest-cni-813973) Getting domain xml...
	I0924 20:06:39.162781   76425 main.go:141] libmachine: (newest-cni-813973) Creating domain...
	I0924 20:06:40.398242   76425 main.go:141] libmachine: (newest-cni-813973) Waiting to get IP...
	I0924 20:06:40.399056   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.399428   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.399477   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.399433   76448 retry.go:31] will retry after 267.563635ms: waiting for machine to come up
	I0924 20:06:40.668985   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.669537   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.669573   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.669494   76448 retry.go:31] will retry after 317.275135ms: waiting for machine to come up
	I0924 20:06:40.987807   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:40.988375   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:40.988396   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:40.988337   76448 retry.go:31] will retry after 338.545245ms: waiting for machine to come up
	I0924 20:06:41.328732   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:41.329217   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:41.329242   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:41.329168   76448 retry.go:31] will retry after 380.674308ms: waiting for machine to come up
	I0924 20:06:41.711843   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:41.712301   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:41.712345   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:41.712276   76448 retry.go:31] will retry after 697.511199ms: waiting for machine to come up
	I0924 20:06:42.411234   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:42.411714   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:42.411742   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:42.411674   76448 retry.go:31] will retry after 769.238862ms: waiting for machine to come up
	I0924 20:06:43.182759   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:43.183241   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:43.183266   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:43.183187   76448 retry.go:31] will retry after 740.100584ms: waiting for machine to come up
	I0924 20:06:43.924193   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:43.924619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:43.924647   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:43.924577   76448 retry.go:31] will retry after 1.472622128s: waiting for machine to come up
	I0924 20:06:45.398527   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:45.399072   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:45.399097   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:45.399028   76448 retry.go:31] will retry after 1.125610234s: waiting for machine to come up
	I0924 20:06:46.526386   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:46.526930   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:46.526972   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:46.526895   76448 retry.go:31] will retry after 2.047140109s: waiting for machine to come up
	I0924 20:06:48.575969   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:48.576384   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:48.576402   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:48.576355   76448 retry.go:31] will retry after 2.412422032s: waiting for machine to come up
	I0924 20:06:50.991542   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:50.992043   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:50.992068   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:50.991993   76448 retry.go:31] will retry after 2.278571042s: waiting for machine to come up
	I0924 20:06:53.271829   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:53.272246   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:53.272266   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:53.272215   76448 retry.go:31] will retry after 4.30479683s: waiting for machine to come up
	I0924 20:06:57.581883   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:06:57.582356   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find current IP address of domain newest-cni-813973 in network mk-newest-cni-813973
	I0924 20:06:57.582401   76425 main.go:141] libmachine: (newest-cni-813973) DBG | I0924 20:06:57.582324   76448 retry.go:31] will retry after 4.135199459s: waiting for machine to come up
	I0924 20:07:01.720860   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.721263   76425 main.go:141] libmachine: (newest-cni-813973) Found IP for machine: 192.168.72.187
	I0924 20:07:01.721299   76425 main.go:141] libmachine: (newest-cni-813973) Reserving static IP address...
	I0924 20:07:01.721313   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has current primary IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.721643   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find host DHCP lease matching {name: "newest-cni-813973", mac: "52:54:00:ae:f7:44", ip: "192.168.72.187"} in network mk-newest-cni-813973
	I0924 20:07:01.798268   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Getting to WaitForSSH function...
	I0924 20:07:01.798297   76425 main.go:141] libmachine: (newest-cni-813973) Reserved static IP address: 192.168.72.187
	I0924 20:07:01.798310   76425 main.go:141] libmachine: (newest-cni-813973) Waiting for SSH to be available...
	I0924 20:07:01.801159   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:01.801553   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973
	I0924 20:07:01.801584   76425 main.go:141] libmachine: (newest-cni-813973) DBG | unable to find defined IP address of network mk-newest-cni-813973 interface with MAC address 52:54:00:ae:f7:44
	I0924 20:07:01.801697   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH client type: external
	I0924 20:07:01.801729   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa (-rw-------)
	I0924 20:07:01.801808   76425 main.go:141] libmachine: (newest-cni-813973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 20:07:01.801830   76425 main.go:141] libmachine: (newest-cni-813973) DBG | About to run SSH command:
	I0924 20:07:01.801857   76425 main.go:141] libmachine: (newest-cni-813973) DBG | exit 0
	I0924 20:07:01.805619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | SSH cmd err, output: exit status 255: 
	I0924 20:07:01.805637   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 20:07:01.805647   76425 main.go:141] libmachine: (newest-cni-813973) DBG | command : exit 0
	I0924 20:07:01.805655   76425 main.go:141] libmachine: (newest-cni-813973) DBG | err     : exit status 255
	I0924 20:07:01.805665   76425 main.go:141] libmachine: (newest-cni-813973) DBG | output  : 
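The "exit 0" probe above fails with status 255 simply because the guest's SSH daemon is not up yet; libmachine keeps retrying. For reference, a minimal shell sketch of the same readiness probe, reusing the key path and (eventual) IP from the log — illustrative only, not part of the captured output:

	until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	          -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	          -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa \
	          docker@192.168.72.187 exit 0; do
	  sleep 3   # keep retrying until the guest answers the probe with exit status 0
	done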
	I0924 20:07:04.806349   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Getting to WaitForSSH function...
	I0924 20:07:04.809046   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.809373   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:04.809403   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.809538   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH client type: external
	I0924 20:07:04.809562   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa (-rw-------)
	I0924 20:07:04.809621   76425 main.go:141] libmachine: (newest-cni-813973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 20:07:04.809638   76425 main.go:141] libmachine: (newest-cni-813973) DBG | About to run SSH command:
	I0924 20:07:04.809650   76425 main.go:141] libmachine: (newest-cni-813973) DBG | exit 0
	I0924 20:07:04.934504   76425 main.go:141] libmachine: (newest-cni-813973) DBG | SSH cmd err, output: <nil>: 
	I0924 20:07:04.934777   76425 main.go:141] libmachine: (newest-cni-813973) KVM machine creation complete!
	I0924 20:07:04.935148   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:07:04.935709   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:04.935883   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:04.936064   76425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 20:07:04.936081   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetState
	I0924 20:07:04.937339   76425 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 20:07:04.937354   76425 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 20:07:04.937361   76425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 20:07:04.937367   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:04.939869   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.940243   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:04.940271   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:04.940409   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:04.940589   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:04.940757   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:04.940904   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:04.941069   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:04.941273   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:04.941290   76425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 20:07:05.041786   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 20:07:05.041811   76425 main.go:141] libmachine: Detecting the provisioner...
	I0924 20:07:05.041821   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.044571   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.045051   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.045078   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.045405   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.046048   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.046364   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.046950   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.047182   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.047362   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.047374   76425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 20:07:05.151242   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 20:07:05.151392   76425 main.go:141] libmachine: found compatible host: buildroot
	I0924 20:07:05.151408   76425 main.go:141] libmachine: Provisioning with buildroot...
	I0924 20:07:05.151420   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.151666   76425 buildroot.go:166] provisioning hostname "newest-cni-813973"
	I0924 20:07:05.151702   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.151893   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.154418   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.154793   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.154817   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.155016   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.155202   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.155342   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.155484   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.155787   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.155967   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.155980   76425 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-813973 && echo "newest-cni-813973" | sudo tee /etc/hostname
	I0924 20:07:05.272474   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-813973
	
	I0924 20:07:05.272497   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.275218   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.275579   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.275608   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.275763   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.275937   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.276103   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.276218   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.276367   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.276525   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.276539   76425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-813973' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-813973/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-813973' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 20:07:05.386645   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 20:07:05.386680   76425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 20:07:05.386699   76425 buildroot.go:174] setting up certificates
	I0924 20:07:05.386708   76425 provision.go:84] configureAuth start
	I0924 20:07:05.386717   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetMachineName
	I0924 20:07:05.387014   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:05.389766   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.390090   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.390114   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.390231   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.392385   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.392665   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.392693   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.392805   76425 provision.go:143] copyHostCerts
	I0924 20:07:05.392880   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 20:07:05.392893   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 20:07:05.392968   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 20:07:05.393075   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 20:07:05.393086   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 20:07:05.393122   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 20:07:05.393197   76425 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 20:07:05.393206   76425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 20:07:05.393241   76425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 20:07:05.393301   76425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.newest-cni-813973 san=[127.0.0.1 192.168.72.187 localhost minikube newest-cni-813973]
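minikube generates this server certificate in Go; a roughly equivalent openssl invocation (hypothetical, shown only to illustrate the CA inputs and SAN list recorded in the line above) would look like:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.newest-cni-813973/CN=minikube"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.187,DNS:localhost,DNS:minikube,DNS:newest-cni-813973') \
	  -out server.pem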
	I0924 20:07:05.671120   76425 provision.go:177] copyRemoteCerts
	I0924 20:07:05.671197   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 20:07:05.671227   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.674366   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.674669   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.674696   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.674872   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.675075   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.675284   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.675432   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:05.756026   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 20:07:05.784414   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 20:07:05.807253   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 20:07:05.830848   76425 provision.go:87] duration metric: took 444.109633ms to configureAuth
	I0924 20:07:05.830881   76425 buildroot.go:189] setting minikube options for container-runtime
	I0924 20:07:05.831064   76425 config.go:182] Loaded profile config "newest-cni-813973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 20:07:05.831133   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:05.833541   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.833869   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:05.833887   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:05.834065   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:05.834234   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.834369   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:05.834500   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:05.834637   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:05.834798   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:05.834813   76425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 20:07:06.051828   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 20:07:06.051854   76425 main.go:141] libmachine: Checking connection to Docker...
	I0924 20:07:06.051865   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetURL
	I0924 20:07:06.053100   76425 main.go:141] libmachine: (newest-cni-813973) DBG | Using libvirt version 6000000
	I0924 20:07:06.055625   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.056002   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.056028   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.056168   76425 main.go:141] libmachine: Docker is up and running!
	I0924 20:07:06.056181   76425 main.go:141] libmachine: Reticulating splines...
	I0924 20:07:06.056196   76425 client.go:171] duration metric: took 27.40916404s to LocalClient.Create
	I0924 20:07:06.056228   76425 start.go:167] duration metric: took 27.409224483s to libmachine.API.Create "newest-cni-813973"
	I0924 20:07:06.056240   76425 start.go:293] postStartSetup for "newest-cni-813973" (driver="kvm2")
	I0924 20:07:06.056252   76425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 20:07:06.056277   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.056537   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 20:07:06.056566   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.058860   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.059141   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.059169   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.059273   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.059444   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.059598   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.059751   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.144646   76425 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 20:07:06.148803   76425 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 20:07:06.148836   76425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 20:07:06.148927   76425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 20:07:06.149051   76425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 20:07:06.149151   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 20:07:06.158045   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:07:06.180901   76425 start.go:296] duration metric: took 124.646985ms for postStartSetup
	I0924 20:07:06.180962   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetConfigRaw
	I0924 20:07:06.181638   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:06.184477   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.184843   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.184870   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.185071   76425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/config.json ...
	I0924 20:07:06.185311   76425 start.go:128] duration metric: took 27.557534549s to createHost
	I0924 20:07:06.185336   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.187604   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.187973   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.187993   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.188130   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.188318   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.188504   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.188668   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.188846   76425 main.go:141] libmachine: Using SSH client type: native
	I0924 20:07:06.189042   76425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0924 20:07:06.189057   76425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 20:07:06.291287   76425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727208426.256685486
	
	I0924 20:07:06.291313   76425 fix.go:216] guest clock: 1727208426.256685486
	I0924 20:07:06.291324   76425 fix.go:229] Guest: 2024-09-24 20:07:06.256685486 +0000 UTC Remote: 2024-09-24 20:07:06.185324618 +0000 UTC m=+27.669581975 (delta=71.360868ms)
	I0924 20:07:06.291349   76425 fix.go:200] guest clock delta is within tolerance: 71.360868ms
	I0924 20:07:06.291356   76425 start.go:83] releasing machines lock for "newest-cni-813973", held for 27.663648785s
	I0924 20:07:06.291380   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.291691   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:06.294218   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.294619   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.294645   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.294862   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295321   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295545   76425 main.go:141] libmachine: (newest-cni-813973) Calling .DriverName
	I0924 20:07:06.295642   76425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 20:07:06.295695   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.295783   76425 ssh_runner.go:195] Run: cat /version.json
	I0924 20:07:06.295810   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHHostname
	I0924 20:07:06.298492   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298570   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298818   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.298865   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.298893   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:06.298908   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:06.299009   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.299143   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHPort
	I0924 20:07:06.299210   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.299327   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHKeyPath
	I0924 20:07:06.299391   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.299519   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetSSHUsername
	I0924 20:07:06.299525   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.299628   76425 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/newest-cni-813973/id_rsa Username:docker}
	I0924 20:07:06.394621   76425 ssh_runner.go:195] Run: systemctl --version
	I0924 20:07:06.400556   76425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 20:07:06.553107   76425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 20:07:06.559200   76425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 20:07:06.559274   76425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 20:07:06.575699   76425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 20:07:06.575724   76425 start.go:495] detecting cgroup driver to use...
	I0924 20:07:06.575801   76425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 20:07:06.594770   76425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 20:07:06.608944   76425 docker.go:217] disabling cri-docker service (if available) ...
	I0924 20:07:06.609011   76425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 20:07:06.622604   76425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 20:07:06.636741   76425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 20:07:06.760362   76425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 20:07:06.911303   76425 docker.go:233] disabling docker service ...
	I0924 20:07:06.911376   76425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 20:07:06.925636   76425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 20:07:06.937650   76425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 20:07:07.092600   76425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 20:07:07.229337   76425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 20:07:07.242703   76425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 20:07:07.261191   76425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 20:07:07.261266   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.272213   76425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 20:07:07.272274   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.283467   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.293967   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.304422   76425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 20:07:07.314190   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.323587   76425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 20:07:07.339697   76425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
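The sed edits above all converge on a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; one hypothetical way to inspect the result on the guest:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf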
	I0924 20:07:07.349696   76425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 20:07:07.359128   76425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 20:07:07.359191   76425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 20:07:07.371644   76425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
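The sysctl failure above is expected until br_netfilter is loaded; after the modprobe and the ip_forward write, the settings can be verified with something like (illustrative, not from the log):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # the bridge key should now resolve; ip_forward should report 1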
	I0924 20:07:07.380510   76425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:07:07.503465   76425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 20:07:07.596017   76425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 20:07:07.596095   76425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 20:07:07.600545   76425 start.go:563] Will wait 60s for crictl version
	I0924 20:07:07.600605   76425 ssh_runner.go:195] Run: which crictl
	I0924 20:07:07.604927   76425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 20:07:07.638945   76425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 20:07:07.639029   76425 ssh_runner.go:195] Run: crio --version
	I0924 20:07:07.665434   76425 ssh_runner.go:195] Run: crio --version
	I0924 20:07:07.692131   76425 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 20:07:07.693523   76425 main.go:141] libmachine: (newest-cni-813973) Calling .GetIP
	I0924 20:07:07.696344   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:07.696730   76425 main.go:141] libmachine: (newest-cni-813973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:f7:44", ip: ""} in network mk-newest-cni-813973: {Iface:virbr2 ExpiryTime:2024-09-24 21:06:52 +0000 UTC Type:0 Mac:52:54:00:ae:f7:44 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:newest-cni-813973 Clientid:01:52:54:00:ae:f7:44}
	I0924 20:07:07.696755   76425 main.go:141] libmachine: (newest-cni-813973) DBG | domain newest-cni-813973 has defined IP address 192.168.72.187 and MAC address 52:54:00:ae:f7:44 in network mk-newest-cni-813973
	I0924 20:07:07.696942   76425 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 20:07:07.700787   76425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:07:07.713808   76425 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0924 20:07:07.715312   76425 kubeadm.go:883] updating cluster {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 20:07:07.715414   76425 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 20:07:07.715473   76425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:07:07.746745   76425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 20:07:07.746822   76425 ssh_runner.go:195] Run: which lz4
	I0924 20:07:07.750510   76425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 20:07:07.754197   76425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 20:07:07.754230   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 20:07:08.919456   76425 crio.go:462] duration metric: took 1.168988302s to copy over tarball
	I0924 20:07:08.919520   76425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 20:07:10.889988   76425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.970442012s)
	I0924 20:07:10.890012   76425 crio.go:469] duration metric: took 1.970532772s to extract the tarball
	I0924 20:07:10.890021   76425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 20:07:10.927914   76425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 20:07:10.978115   76425 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 20:07:10.978134   76425 cache_images.go:84] Images are preloaded, skipping loading
	I0924 20:07:10.978142   76425 kubeadm.go:934] updating node { 192.168.72.187 8443 v1.31.1 crio true true} ...
	I0924 20:07:10.978266   76425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-813973 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
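This generated unit content is later copied to the node as a systemd drop-in (see the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down); a hypothetical way to confirm kubelet picked it up:

	systemctl cat kubelet.service      # shows the base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && sudo systemctl restart kubelet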
	I0924 20:07:10.978362   76425 ssh_runner.go:195] Run: crio config
	I0924 20:07:11.036930   76425 cni.go:84] Creating CNI manager for ""
	I0924 20:07:11.036951   76425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:07:11.036959   76425 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0924 20:07:11.036979   76425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.187 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-813973 NodeName:newest-cni-813973 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 20:07:11.037106   76425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-813973"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 20:07:11.037163   76425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 20:07:11.047054   76425 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 20:07:11.047128   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 20:07:11.056883   76425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0924 20:07:11.074947   76425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 20:07:11.090734   76425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
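The kubeadm config printed earlier is what just landed at /var/tmp/minikube/kubeadm.yaml.new; on a first start it would typically be applied with something along these lines (hedged sketch — the actual invocation and path passed to kubeadm are not shown in this excerpt):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml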
	I0924 20:07:11.106539   76425 ssh_runner.go:195] Run: grep 192.168.72.187	control-plane.minikube.internal$ /etc/hosts
	I0924 20:07:11.110030   76425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 20:07:11.121758   76425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 20:07:11.245501   76425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 20:07:11.261204   76425 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973 for IP: 192.168.72.187
	I0924 20:07:11.261226   76425 certs.go:194] generating shared ca certs ...
	I0924 20:07:11.261246   76425 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.261454   76425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 20:07:11.261519   76425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 20:07:11.261536   76425 certs.go:256] generating profile certs ...
	I0924 20:07:11.261597   76425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key
	I0924 20:07:11.261626   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt with IP's: []
	I0924 20:07:11.445372   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt ...
	I0924 20:07:11.445399   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.crt: {Name:mk02198f79bf57c260d26b734ea22aa8f3f628e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.445593   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key ...
	I0924 20:07:11.445607   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/client.key: {Name:mk76c67ce818e99f0f77f95f697fcab0ea369953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.445715   76425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f
	I0924 20:07:11.445738   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.187]
	I0924 20:07:11.549177   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f ...
	I0924 20:07:11.549206   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f: {Name:mke312c4bd31b33be33a00ef285941d3b770c863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.549373   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f ...
	I0924 20:07:11.549391   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f: {Name:mkc945c031c99d449634672489a28148f09be903 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:11.549484   76425 certs.go:381] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt.da78465f -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt
	I0924 20:07:11.549605   76425 certs.go:385] copying /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key.da78465f -> /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key
	I0924 20:07:11.549688   76425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key
	I0924 20:07:11.549707   76425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt with IP's: []
	I0924 20:07:12.065999   76425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt ...
	I0924 20:07:12.066031   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt: {Name:mkd8a9abf68ce58cdf1cea7c32a6baf88e34e862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:12.066194   76425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key ...
	I0924 20:07:12.066207   76425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key: {Name:mkef3e0872e81e61a1d34234f751fb86d1a18ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 20:07:12.066375   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 20:07:12.066411   76425 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 20:07:12.066421   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 20:07:12.066445   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 20:07:12.066472   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 20:07:12.066494   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 20:07:12.066528   76425 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 20:07:12.067067   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 20:07:12.103398   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 20:07:12.128148   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 20:07:12.152344   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 20:07:12.175290   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 20:07:12.198416   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 20:07:12.221995   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 20:07:12.244836   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/newest-cni-813973/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 20:07:12.268723   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 20:07:12.290392   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 20:07:12.312901   76425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 20:07:12.335124   76425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
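The "scp memory -->" entries above indicate the payload is generated inside the minikube process and streamed to the node over SSH rather than copied from a file on the Jenkins host. A minimal sketch of that pattern, assuming a hypothetical SSH alias "minikube-node" and a KUBECONFIG_CONTENT variable holding the generated file (neither name appears in the log):

# hedged sketch only: stream in-memory content to the node, as the scp-memory steps above do
printf '%s' "$KUBECONFIG_CONTENT" | ssh minikube-node 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'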
	I0924 20:07:12.351527   76425 ssh_runner.go:195] Run: openssl version
	I0924 20:07:12.357263   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 20:07:12.368584   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.372680   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.372731   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 20:07:12.378250   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 20:07:12.389858   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 20:07:12.401969   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.407422   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.407492   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 20:07:12.413535   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 20:07:12.423829   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 20:07:12.433910   76425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.438013   76425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.438081   76425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 20:07:12.443963   76425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
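Each of the three CA bundles above (10949.pem, 109492.pem, minikubeCA.pem) is installed the same way: the file is placed under /usr/share/ca-certificates, hashed with "openssl x509 -hash -noout", and symlinked into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients on the node trust it. A minimal sketch of that hash-and-link pattern; the file name passed in is a placeholder, not one of the certs from this run:

# hedged sketch of the pattern applied by the Run: lines above
install_ca() {
  local pem="/usr/share/ca-certificates/$1"
  sudo test -s "$pem" || return 1                  # skip missing or empty bundles
  local hash
  hash=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941 for minikubeCA.pem
  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # OpenSSL-style hashed symlink
}
install_ca example-ca.pem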
	I0924 20:07:12.454264   76425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 20:07:12.457882   76425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 20:07:12.457932   76425 kubeadm.go:392] StartCluster: {Name:newest-cni-813973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-813973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 20:07:12.458015   76425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 20:07:12.458082   76425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 20:07:12.493508   76425 cri.go:89] found id: ""
	I0924 20:07:12.493587   76425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 20:07:12.503790   76425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 20:07:12.513883   76425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 20:07:12.522915   76425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 20:07:12.522937   76425 kubeadm.go:157] found existing configuration files:
	
	I0924 20:07:12.522979   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 20:07:12.532237   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 20:07:12.532317   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 20:07:12.542296   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 20:07:12.551220   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 20:07:12.551287   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 20:07:12.560206   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 20:07:12.568894   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 20:07:12.568966   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 20:07:12.578037   76425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 20:07:12.586622   76425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 20:07:12.586682   76425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
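The four grep/rm pairs above are the stale-config check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm runs, and grep's exit status 2 here simply means the file does not exist yet. A compact sketch of the same loop, with the endpoint value taken from the log above:

# hedged sketch of the stale-config cleanup performed above
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done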
	I0924 20:07:12.595625   76425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 20:07:12.693415   76425 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 20:07:12.693631   76425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 20:07:12.799057   76425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 20:07:12.799235   76425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 20:07:12.799367   76425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 20:07:12.811671   76425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 20:07:13.106703   76425 out.go:235]   - Generating certificates and keys ...
	I0924 20:07:13.106814   76425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 20:07:13.106896   76425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 20:07:13.106986   76425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 20:07:13.306654   76425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 20:07:13.465032   76425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 20:07:13.589548   76425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 20:07:13.796618   76425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 20:07:13.796875   76425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-813973] and IPs [192.168.72.187 127.0.0.1 ::1]
	I0924 20:07:13.906782   76425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 20:07:13.907071   76425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-813973] and IPs [192.168.72.187 127.0.0.1 ::1]
	I0924 20:07:13.974617   76425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 20:07:14.051234   76425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 20:07:14.345304   76425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 20:07:14.345503   76425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 20:07:14.516228   76425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 20:07:14.614507   76425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 20:07:14.742285   76425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 20:07:14.939001   76425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 20:07:15.079443   76425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 20:07:15.080248   76425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 20:07:15.083403   76425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 20:07:15.085524   76425 out.go:235]   - Booting up control plane ...
	I0924 20:07:15.085632   76425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 20:07:15.085700   76425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 20:07:15.088634   76425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 20:07:15.113973   76425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 20:07:15.121311   76425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 20:07:15.121387   76425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 20:07:15.283199   76425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 20:07:15.283308   76425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 20:07:15.784295   76425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.411847ms
	I0924 20:07:15.784400   76425 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 20:07:20.784402   76425 kubeadm.go:310] [api-check] The API server is healthy after 5.001881692s
	I0924 20:07:20.797902   76425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 20:07:20.815697   76425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 20:07:20.846668   76425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 20:07:20.846926   76425 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-813973 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 20:07:20.859423   76425 kubeadm.go:310] [bootstrap-token] Using token: irxg93.qdmet1poof6a021o
	I0924 20:07:20.860861   76425 out.go:235]   - Configuring RBAC rules ...
	I0924 20:07:20.860985   76425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 20:07:20.867134   76425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 20:07:20.877046   76425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 20:07:20.882293   76425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 20:07:20.893162   76425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 20:07:20.897615   76425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 20:07:21.194512   76425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 20:07:21.616805   76425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 20:07:22.193907   76425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 20:07:22.195170   76425 kubeadm.go:310] 
	I0924 20:07:22.195289   76425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 20:07:22.195307   76425 kubeadm.go:310] 
	I0924 20:07:22.195409   76425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 20:07:22.195428   76425 kubeadm.go:310] 
	I0924 20:07:22.195465   76425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 20:07:22.195542   76425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 20:07:22.195626   76425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 20:07:22.195643   76425 kubeadm.go:310] 
	I0924 20:07:22.195724   76425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 20:07:22.195737   76425 kubeadm.go:310] 
	I0924 20:07:22.195809   76425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 20:07:22.195822   76425 kubeadm.go:310] 
	I0924 20:07:22.195899   76425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 20:07:22.196012   76425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 20:07:22.196111   76425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 20:07:22.196124   76425 kubeadm.go:310] 
	I0924 20:07:22.196251   76425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 20:07:22.196317   76425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 20:07:22.196323   76425 kubeadm.go:310] 
	I0924 20:07:22.196418   76425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token irxg93.qdmet1poof6a021o \
	I0924 20:07:22.196559   76425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 20:07:22.196592   76425 kubeadm.go:310] 	--control-plane 
	I0924 20:07:22.196602   76425 kubeadm.go:310] 
	I0924 20:07:22.196684   76425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 20:07:22.196696   76425 kubeadm.go:310] 
	I0924 20:07:22.196765   76425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token irxg93.qdmet1poof6a021o \
	I0924 20:07:22.196869   76425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 20:07:22.197757   76425 kubeadm.go:310] W0924 20:07:12.656079     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 20:07:22.198052   76425 kubeadm.go:310] W0924 20:07:12.656789     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 20:07:22.198153   76425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 20:07:22.198187   76425 cni.go:84] Creating CNI manager for ""
	I0924 20:07:22.198197   76425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 20:07:22.200988   76425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 20:07:22.202197   76425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 20:07:22.213416   76425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
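The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced just above. Its exact contents are not in the log; the following is only an illustrative bridge conflist of that general shape, with the pod subnet taken from the pod-network-cidr (10.42.0.0/16) in StartCluster above and every other value a placeholder:

# hedged illustration of a bridge CNI conflist, not the file minikube actually wrote
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF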
	I0924 20:07:22.235955   76425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 20:07:22.236064   76425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 20:07:22.236091   76425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-813973 minikube.k8s.io/updated_at=2024_09_24T20_07_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=newest-cni-813973 minikube.k8s.io/primary=true
	I0924 20:07:22.273934   76425 ops.go:34] apiserver oom_adj: -16
	I0924 20:07:22.426560   76425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 20:07:22.926950   76425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 20:07:23.427428   76425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
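The repeated "kubectl get sa default" commands above are the post-init readiness poll: minikube retries the call at roughly half-second intervals (20:07:22.43, 22.93, 23.43) until the default ServiceAccount exists, which signals the new control plane is serving requests. A minimal sketch of that poll, assuming the same binary path and kubeconfig shown in the log; the retry loop itself is an assumption:

# hedged sketch of the readiness poll: retry until the default ServiceAccount exists
until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done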
	
	
	==> CRI-O <==
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.850513781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208444850492514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6740ba57-0b6a-492b-8afb-adb1a571ce2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.851047729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26abe4d7-111a-436a-9942-5ae5a662a650 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.851104205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26abe4d7-111a-436a-9942-5ae5a662a650 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.851284652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26abe4d7-111a-436a-9942-5ae5a662a650 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.890144126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c69c3c4b-cd68-4889-8187-24e8eec2186c name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.890229035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c69c3c4b-cd68-4889-8187-24e8eec2186c name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.891231985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=589556c7-2f26-4aaf-be01-46ffd856a68a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.891644180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208444891621526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=589556c7-2f26-4aaf-be01-46ffd856a68a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.892297945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9679cbe-50d8-4e66-b11a-f2200e55d891 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.892393242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9679cbe-50d8-4e66-b11a-f2200e55d891 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.892601045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9679cbe-50d8-4e66-b11a-f2200e55d891 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.936898605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a62f9ce-5662-44f5-85d8-a42907160fbe name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.937011890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a62f9ce-5662-44f5-85d8-a42907160fbe name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.938067246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1500f830-bfdb-4cb3-8682-72fdf398221d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.938453759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208444938433666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1500f830-bfdb-4cb3-8682-72fdf398221d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.939044320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1c36753-e8d1-4a0f-b280-c03585d200f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.939111863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1c36753-e8d1-4a0f-b280-c03585d200f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.939305943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1c36753-e8d1-4a0f-b280-c03585d200f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.978326107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6acc8852-14b9-4e33-a629-d9a7c7e3521f name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.978430700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6acc8852-14b9-4e33-a629-d9a7c7e3521f name=/runtime.v1.RuntimeService/Version
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.979507508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f797f82-9a50-4214-84bf-fabd41ff275b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.979897289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208444979876856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f797f82-9a50-4214-84bf-fabd41ff275b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.980542574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b8e381e-5bed-4921-8668-c1fb524d0e7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.980614614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b8e381e-5bed-4921-8668-c1fb524d0e7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:07:24 embed-certs-311319 crio[702]: time="2024-09-24 20:07:24.980820943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f,PodSandboxId:131d8a27413b9da4c76720aed269dc33dcfe9410d87c9d9d4bf2bb4c6e50cc00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727207536445414910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 766bdfe2-684a-47de-94fd-088795b60e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c,PodSandboxId:ce52f165118d0afe9e96c805d3dd46689c87d4b5ae0e6b9d21d876aeb27227dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535581376568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsvdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da741136-c1ce-436f-9df0-e447b067265f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4,PodSandboxId:a67c3d12af90eab2d0762d56831c81890781d27639cbb0f17aed5461e4a1bc4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727207535547918715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qgfvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742,PodSandboxId:ecb272b5bfcdb94d8c8fc935b46a4ed895e341a3f49580cc5abffe1d36e10246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727207534928446558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h42s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76930a49-6a8a-4d02-84b8-8e26f3196ac3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2,PodSandboxId:db53007d93ee51f1c6c16ac1b340b88ca541318a788211d0d77906a9dfa6a381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727207524368871412,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad14993d013043ea3331f97542f9eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0,PodSandboxId:3ac76716e50414ae5c84d2ade366173eacd92c085baf40db1a18823eab3df0e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727207524345499296,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0c4dbda08a77c349b709614a89de24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd,PodSandboxId:bb4de848e5140aaa278941b7d932f4dfae34911454b8fa1384e4eac2510a5071,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727207524299690841,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da5f5d7202ef1fb79e94fc63f1e707e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9,PodSandboxId:3274ea157c618d773cbe8a7578b5ca10beb0adabc1b8954a8650a3bb902234ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727207524269116162,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d,PodSandboxId:1d7f2587996ec807b1c9a448669e61278cd671e026a8e19e571b842cd11e8a3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727207242298281063,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-311319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491c336d31724d98308cdabbc6d0100e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b8e381e-5bed-4921-8668-c1fb524d0e7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34839ea54a689       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   131d8a27413b9       storage-provisioner
	cc98dcca72ffe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   ce52f165118d0       coredns-7c65d6cfc9-jsvdk
	dc0a601e7e634       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   a67c3d12af90e       coredns-7c65d6cfc9-qgfvt
	f63b4a01201f9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   ecb272b5bfcdb       kube-proxy-h42s7
	fca8e7b367bba       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   db53007d93ee5       kube-scheduler-embed-certs-311319
	c9e336118db96       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   3ac76716e5041       etcd-embed-certs-311319
	d87a2e960ce81       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   bb4de848e5140       kube-controller-manager-embed-certs-311319
	ddf5bb4c542ce       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   3274ea157c618       kube-apiserver-embed-certs-311319
	d13b5e782473d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   1d7f2587996ec       kube-apiserver-embed-certs-311319
	
	
	==> coredns [cc98dcca72ffe9c57e5273f7a9a8eb9474b1d72b74843c7ef2699e6f61afd48c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [dc0a601e7e63462ae468d844a5d9ab5ab2503cca90d8aa79e405980da5ab00c4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-311319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-311319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=embed-certs-311319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:52:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-311319
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 20:07:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 20:02:32 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 20:02:32 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 20:02:32 +0000   Tue, 24 Sep 2024 19:52:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 20:02:32 +0000   Tue, 24 Sep 2024 19:52:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.21
	  Hostname:    embed-certs-311319
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5aa1f90b84574d049fd1d5b4831e8f5a
	  System UUID:                5aa1f90b-8457-4d04-9fd1-d5b4831e8f5a
	  Boot ID:                    2a938032-7c38-4598-a997-31f6fe2d9d55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jsvdk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-qgfvt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-311319                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-311319             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-311319    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-h42s7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-311319             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-xnwm4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-311319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-311319 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-311319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-311319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-311319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-311319 event: Registered Node embed-certs-311319 in Controller
	
	
	==> dmesg <==
	[  +0.048034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040696] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep24 19:47] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.800042] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543751] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.370586] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.063360] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050821] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.173868] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.121991] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.266232] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +3.732987] systemd-fstab-generator[783]: Ignoring "noauto" option for root device
	[  +1.731156] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.062452] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.492552] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.804494] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 19:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.601286] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +4.402424] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.137894] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +5.849886] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.106281] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.912819] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [c9e336118db96c0e647fbb43585d09d58a5059c71eaa9fa6344c8f8cd15176d0] <==
	{"level":"info","ts":"2024-09-24T19:52:05.445435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 received MsgPreVoteResp from f9d71b865fa366d6 at term 1"}
	{"level":"info","ts":"2024-09-24T19:52:05.445473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 received MsgVoteResp from f9d71b865fa366d6 at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9d71b865fa366d6 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.445555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f9d71b865fa366d6 elected leader f9d71b865fa366d6 at term 2"}
	{"level":"info","ts":"2024-09-24T19:52:05.447162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448031Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f9d71b865fa366d6","local-member-attributes":"{Name:embed-certs-311319 ClientURLs:[https://192.168.61.21:2379]}","request-path":"/0/members/f9d71b865fa366d6/attributes","cluster-id":"7e645ebb8a3ca2e3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:52:05.448226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:52:05.448519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:52:05.448733Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7e645ebb8a3ca2e3","local-member-id":"f9d71b865fa366d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:52:05.448828Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T19:52:05.448888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.448932Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:52:05.449477Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:52:05.449587Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T19:52:05.450244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:52:05.450393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.21:2379"}
	{"level":"info","ts":"2024-09-24T20:02:05.474887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-09-24T20:02:05.483481Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"8.269761ms","hash":2320678004,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2265088,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-24T20:02:05.483528Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2320678004,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T20:07:05.481446Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-09-24T20:07:05.484740Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":964,"took":"3.049705ms","hash":3187878393,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-24T20:07:05.484788Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3187878393,"revision":964,"compact-revision":721}
	{"level":"info","ts":"2024-09-24T20:07:12.831041Z","caller":"traceutil/trace.go:171","msg":"trace[1656720906] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"118.511724ms","start":"2024-09-24T20:07:12.712293Z","end":"2024-09-24T20:07:12.830805Z","steps":["trace[1656720906] 'process raft request'  (duration: 118.384525ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:07:25 up 20 min,  0 users,  load average: 0.55, 0.16, 0.09
	Linux embed-certs-311319 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d13b5e782473d933aff8bdaff83c0fd8fb2b7ba5f825b711df5c127957017d5d] <==
	W0924 19:51:58.204074       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.235203       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.278205       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.350420       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.402514       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.475449       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.480116       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.519791       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:51:58.543681       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.704228       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.776409       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.953055       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:00.957400       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.123608       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.240448       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.265223       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.330030       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.338604       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.410880       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.461854       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.468512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.513882       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.563265       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.584777       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 19:52:01.642239       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ddf5bb4c542ce6d3d86717ddbba9c22cb79ced7647cb77eb559bbc0b8c1584c9] <==
	 > logger="UnhandledError"
	I0924 20:03:07.711690       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:05:07.711226       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:05:07.711375       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 20:05:07.712402       1 handler_proxy.go:99] no RequestInfo found in the context
	I0924 20:05:07.712502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0924 20:05:07.712503       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:05:07.713961       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 20:07:06.711674       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:07:06.711759       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 20:07:07.713997       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:07:07.714065       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 20:07:07.714105       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 20:07:07.714152       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 20:07:07.715220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 20:07:07.715278       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d87a2e960ce814d9d4dfadd3629b9def9a617c612e66946b1b52b7cbe89a75fd] <==
	E0924 20:02:13.653174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:02:14.099070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:02:32.188521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-311319"
	E0924 20:02:43.658126       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:02:44.105396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:03:13.665016       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:14.113479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 20:03:19.165029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="353.73µs"
	I0924 20:03:30.164398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="53.139µs"
	E0924 20:03:43.670132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:03:44.120766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:13.675639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:14.127709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:04:43.682089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:04:44.134030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:13.687878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:14.140417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:05:43.693590       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:05:44.149292       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:13.699283       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:14.156320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:06:43.707057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:06:44.164384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 20:07:13.714577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 20:07:14.172643       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f63b4a01201f91ef68436ef7b9f220ee8a42a6db590a22d0094a0d0141ca5742] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 19:52:15.344510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 19:52:15.367569       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.21"]
	E0924 19:52:15.367618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 19:52:15.586965       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 19:52:15.587001       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 19:52:15.587025       1 server_linux.go:169] "Using iptables Proxier"
	I0924 19:52:15.594145       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 19:52:15.594380       1 server.go:483] "Version info" version="v1.31.1"
	I0924 19:52:15.594391       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:52:15.595730       1 config.go:199] "Starting service config controller"
	I0924 19:52:15.595753       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 19:52:15.595771       1 config.go:105] "Starting endpoint slice config controller"
	I0924 19:52:15.595775       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 19:52:15.599634       1 config.go:328] "Starting node config controller"
	I0924 19:52:15.599644       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 19:52:15.696807       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 19:52:15.696849       1 shared_informer.go:320] Caches are synced for service config
	I0924 19:52:15.699839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fca8e7b367bbab2a86cc8801e29b23fa44bc694d1f19f793edd0abd324d5d8a2] <==
	W0924 19:52:06.716855       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:52:06.716934       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 19:52:07.538634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.538683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.583533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 19:52:07.583638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.668732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 19:52:07.668902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.733245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 19:52:07.733292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.734664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.734812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.744995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:52:07.745077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.892355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:52:07.892505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.908388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 19:52:07.908500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.920918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 19:52:07.921014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:07.934878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:52:07.934938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 19:52:08.278578       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 19:52:08.278638       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 19:52:10.009494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 20:06:18 embed-certs-311319 kubelet[2883]: E0924 20:06:18.150513    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:06:19 embed-certs-311319 kubelet[2883]: E0924 20:06:19.334061    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208379333727197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:19 embed-certs-311319 kubelet[2883]: E0924 20:06:19.334087    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208379333727197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:29 embed-certs-311319 kubelet[2883]: E0924 20:06:29.150316    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:06:29 embed-certs-311319 kubelet[2883]: E0924 20:06:29.335663    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208389335299989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:29 embed-certs-311319 kubelet[2883]: E0924 20:06:29.335714    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208389335299989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:39 embed-certs-311319 kubelet[2883]: E0924 20:06:39.337862    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208399337311801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:39 embed-certs-311319 kubelet[2883]: E0924 20:06:39.338438    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208399337311801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:43 embed-certs-311319 kubelet[2883]: E0924 20:06:43.150018    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:06:49 embed-certs-311319 kubelet[2883]: E0924 20:06:49.341136    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208409340686255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:49 embed-certs-311319 kubelet[2883]: E0924 20:06:49.341437    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208409340686255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:54 embed-certs-311319 kubelet[2883]: E0924 20:06:54.149665    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:06:59 embed-certs-311319 kubelet[2883]: E0924 20:06:59.342923    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208419342635498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:06:59 embed-certs-311319 kubelet[2883]: E0924 20:06:59.343267    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208419342635498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:07 embed-certs-311319 kubelet[2883]: E0924 20:07:07.151333    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]: E0924 20:07:09.162568    2883 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]: E0924 20:07:09.344513    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208429344293171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:09 embed-certs-311319 kubelet[2883]: E0924 20:07:09.344548    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208429344293171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:19 embed-certs-311319 kubelet[2883]: E0924 20:07:19.151047    2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xnwm4" podUID="dc64f26b-e4a6-4692-83d5-e6c794c1b130"
	Sep 24 20:07:19 embed-certs-311319 kubelet[2883]: E0924 20:07:19.346463    2883 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208439346088451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 20:07:19 embed-certs-311319 kubelet[2883]: E0924 20:07:19.346563    2883 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208439346088451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
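Note on the kubelet excerpt above, which repeats three patterns. The ImagePullBackOff entries for fake.domain/registry.k8s.io/echoserver:1.4 appear to be expected for this suite, since the metrics-server pod is pointed at a registry name that cannot resolve, so the pull can never succeed. The eviction_manager "failed to get HasDedicatedImageFs" errors recur roughly every ten seconds and mean the kubelet could not derive dedicated-image-filesystem stats from the CRI-O ImageFsInfo response; they are noisy but are not what this test asserts on. The iptables canary failure indicates the ip6tables nat table is not available in the guest kernel. A hedged spot-check of the image-pull state from outside the node (assuming the addon's usual k8s-app=metrics-server label; not part of the recorded run):

	# hypothetical: print the waiting reason of the metrics-server container
	kubectl --context embed-certs-311319 -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'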
	
	
	==> storage-provisioner [34839ea54a6890ea675261ba9af3170d6b99038780d665abd04b35d45bb48f6f] <==
	I0924 19:52:16.530927       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:52:16.553345       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:52:16.553516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:52:16.566116       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:52:16.566322       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034!
	I0924 19:52:16.567099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28f73b8c-db42-48eb-ba7a-97825a01b844", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034 became leader
	I0924 19:52:16.667287       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-311319_a57cb52b-a249-4ba9-8011-a2b657fd0034!
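Note on the storage-provisioner excerpt above: this is the controller's normal start-up path, i.e. initialize, acquire the kube-system/k8s.io-minikube-hostpath leader-election lock (surfaced as the LeaderElection event on the Endpoints object), then start the provisioner controller. A hedged way to inspect the current lock holder after the fact (illustrative only, not part of the recorded run):

	# hypothetical: the holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context embed-certs-311319 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml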
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-311319 -n embed-certs-311319
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-311319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xnwm4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4: exit status 1 (60.449149ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xnwm4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-311319 describe pod metrics-server-6867b74b74-xnwm4: exit status 1
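Note on the post-mortem above: the non-running pod at helpers_test.go:272 was found by an all-namespaces listing (-A), but the describe at helpers_test.go:277 passes no namespace and therefore looks in default, which is the likely reason it reports NotFound even though metrics-server-6867b74b74-xnwm4 lives in kube-system; the exit status 1 is an artifact of the helper rather than an additional failure. A hedged variant that would target the right namespace (illustrative only, not part of the recorded run):

	# hypothetical: describe the pod in the namespace where the listing found it
	kubectl --context embed-certs-311319 -n kube-system describe pod metrics-server-6867b74b74-xnwm4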
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
E0924 20:04:38.249412   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
E0924 20:04:49.790653   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
E0924 20:05:13.536073   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
E0924 20:06:08.228704   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.81:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (220.80613ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-510301" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-510301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-510301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.541µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-510301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
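Once the profile's apiserver is reachable again, the dashboard checks this test performs can be rerun by hand; a minimal sketch, using only the context, namespace, label selector, and deployment name that appear in the output above:

  kubectl --context old-k8s-version-510301 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-510301 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper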
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (220.086489ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-510301 logs -n 25: (1.50832771s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-038637 sudo cat                              | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:37 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo                                  | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:37 UTC | 24 Sep 24 19:38 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo find                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-038637 sudo crio                             | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-038637                                       | calico-038637                | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-119609 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | disable-driver-mounts-119609                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:39 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-311319            | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC | 24 Sep 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-965745             | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-093771  | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC | 24 Sep 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:39 UTC |                     |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-510301        | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-311319                 | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-311319                                  | embed-certs-311319           | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-965745                  | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-965745                                   | no-preload-965745            | jenkins | v1.34.0 | 24 Sep 24 19:41 UTC | 24 Sep 24 19:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-093771       | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-093771 | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:51 UTC |
	|         | default-k8s-diff-port-093771                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-510301             | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC | 24 Sep 24 19:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-510301                              | old-k8s-version-510301       | jenkins | v1.34.0 | 24 Sep 24 19:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
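	A rough reassembly of the final "start" row above into a single command line (sketch only; the flags are taken from the Args column, and the out/minikube-linux-amd64 binary path is assumed from the invocations earlier in this report):
	
	  out/minikube-linux-amd64 start -p old-k8s-version-510301 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0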
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 19:42:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 19:42:46.491955   70152 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:42:46.492212   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492222   70152 out.go:358] Setting ErrFile to fd 2...
	I0924 19:42:46.492228   70152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:42:46.492386   70152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:42:46.492893   70152 out.go:352] Setting JSON to false
	I0924 19:42:46.493799   70152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5117,"bootTime":1727201849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:42:46.493899   70152 start.go:139] virtualization: kvm guest
	I0924 19:42:46.496073   70152 out.go:177] * [old-k8s-version-510301] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:42:46.497447   70152 notify.go:220] Checking for updates...
	I0924 19:42:46.497466   70152 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:42:46.498899   70152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:42:46.500315   70152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:42:46.502038   70152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:42:46.503591   70152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:42:46.505010   70152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:42:46.506789   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:42:46.507239   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.507282   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.522338   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0924 19:42:46.522810   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.523430   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.523450   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.523809   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.523989   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.525830   70152 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 19:42:46.527032   70152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:42:46.527327   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:42:46.527361   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:42:46.542427   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0924 19:42:46.542782   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:42:46.543220   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:42:46.543237   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:42:46.543562   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:42:46.543731   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:42:46.577253   70152 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 19:42:46.578471   70152 start.go:297] selected driver: kvm2
	I0924 19:42:46.578486   70152 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.578620   70152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:42:46.579480   70152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.579576   70152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 19:42:46.595023   70152 install.go:137] /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 19:42:46.595376   70152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:42:46.595401   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:42:46.595427   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:42:46.595456   70152 start.go:340] cluster config:
	{Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:42:46.595544   70152 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 19:42:46.597600   70152 out.go:177] * Starting "old-k8s-version-510301" primary control-plane node in "old-k8s-version-510301" cluster
	I0924 19:42:49.587099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:46.599107   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:42:46.599145   70152 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 19:42:46.599157   70152 cache.go:56] Caching tarball of preloaded images
	I0924 19:42:46.599232   70152 preload.go:172] Found /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 19:42:46.599246   70152 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 19:42:46.599368   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:42:46.599577   70152 start.go:360] acquireMachinesLock for old-k8s-version-510301: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:42:52.659112   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:42:58.739082   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:01.811107   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:07.891031   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:10.963093   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:17.043125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:20.115055   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:26.195121   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:29.267111   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:35.347125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:38.419109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:44.499098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:47.571040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:53.651128   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:43:56.723110   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:02.803080   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:05.875118   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:11.955117   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:15.027102   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:21.107097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:24.179122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:30.259099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:33.331130   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:39.411086   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:42.483063   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:48.563071   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:51.635087   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:44:57.715125   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:00.787050   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:06.867122   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:09.939097   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:16.019098   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:19.091109   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:25.171099   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:28.243075   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:34.323040   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:37.395180   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:43.475096   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:46.547060   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:52.627035   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:55.699131   69408 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.21:22: connect: no route to host
	I0924 19:45:58.703628   69576 start.go:364] duration metric: took 4m21.10107111s to acquireMachinesLock for "no-preload-965745"
	I0924 19:45:58.703677   69576 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:45:58.703682   69576 fix.go:54] fixHost starting: 
	I0924 19:45:58.704078   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:45:58.704123   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:45:58.719888   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0924 19:45:58.720250   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:45:58.720694   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:45:58.720714   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:45:58.721073   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:45:58.721262   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:45:58.721419   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:45:58.723062   69576 fix.go:112] recreateIfNeeded on no-preload-965745: state=Stopped err=<nil>
	I0924 19:45:58.723086   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	W0924 19:45:58.723253   69576 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:45:58.725047   69576 out.go:177] * Restarting existing kvm2 VM for "no-preload-965745" ...
	I0924 19:45:58.701057   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:45:58.701123   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701448   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:45:58.701474   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:45:58.701688   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:45:58.703495   69408 machine.go:96] duration metric: took 4m37.423499364s to provisionDockerMachine
	I0924 19:45:58.703530   69408 fix.go:56] duration metric: took 4m37.446368089s for fixHost
	I0924 19:45:58.703536   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 4m37.446384972s
	W0924 19:45:58.703575   69408 start.go:714] error starting host: provision: host is not running
	W0924 19:45:58.703648   69408 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 19:45:58.703659   69408 start.go:729] Will try again in 5 seconds ...
	I0924 19:45:58.726232   69576 main.go:141] libmachine: (no-preload-965745) Calling .Start
	I0924 19:45:58.726397   69576 main.go:141] libmachine: (no-preload-965745) Ensuring networks are active...
	I0924 19:45:58.727100   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network default is active
	I0924 19:45:58.727392   69576 main.go:141] libmachine: (no-preload-965745) Ensuring network mk-no-preload-965745 is active
	I0924 19:45:58.727758   69576 main.go:141] libmachine: (no-preload-965745) Getting domain xml...
	I0924 19:45:58.728339   69576 main.go:141] libmachine: (no-preload-965745) Creating domain...
	I0924 19:45:59.928391   69576 main.go:141] libmachine: (no-preload-965745) Waiting to get IP...
	I0924 19:45:59.929441   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:45:59.929931   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:45:59.929982   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:45:59.929905   70821 retry.go:31] will retry after 231.188723ms: waiting for machine to come up
	I0924 19:46:00.162502   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.162993   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.163021   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.162944   70821 retry.go:31] will retry after 278.953753ms: waiting for machine to come up
	I0924 19:46:00.443443   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.443868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.443895   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.443830   70821 retry.go:31] will retry after 307.192984ms: waiting for machine to come up
	I0924 19:46:00.752227   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:00.752637   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:00.752666   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:00.752602   70821 retry.go:31] will retry after 596.967087ms: waiting for machine to come up
	I0924 19:46:01.351461   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.351906   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.351933   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.351859   70821 retry.go:31] will retry after 579.94365ms: waiting for machine to come up
	I0924 19:46:01.933682   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:01.934110   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:01.934141   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:01.934070   70821 retry.go:31] will retry after 862.980289ms: waiting for machine to come up
	I0924 19:46:03.705206   69408 start.go:360] acquireMachinesLock for embed-certs-311319: {Name:mk2c75caa9e95878010bc0bf0b82c06d2b0740a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 19:46:02.799129   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:02.799442   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:02.799471   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:02.799394   70821 retry.go:31] will retry after 992.898394ms: waiting for machine to come up
	I0924 19:46:03.794034   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:03.794462   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:03.794518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:03.794440   70821 retry.go:31] will retry after 917.82796ms: waiting for machine to come up
	I0924 19:46:04.713515   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:04.713888   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:04.713911   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:04.713861   70821 retry.go:31] will retry after 1.30142733s: waiting for machine to come up
	I0924 19:46:06.017327   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:06.017868   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:06.017891   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:06.017835   70821 retry.go:31] will retry after 1.585023602s: waiting for machine to come up
	I0924 19:46:07.603787   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:07.604129   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:07.604148   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:07.604108   70821 retry.go:31] will retry after 2.382871382s: waiting for machine to come up
	I0924 19:46:09.989065   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:09.989530   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:09.989592   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:09.989504   70821 retry.go:31] will retry after 3.009655055s: waiting for machine to come up
	I0924 19:46:17.011094   69904 start.go:364] duration metric: took 3m57.677491969s to acquireMachinesLock for "default-k8s-diff-port-093771"
	I0924 19:46:17.011169   69904 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:17.011180   69904 fix.go:54] fixHost starting: 
	I0924 19:46:17.011578   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:17.011648   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:17.030756   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0924 19:46:17.031186   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:17.031698   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:46:17.031722   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:17.032028   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:17.032198   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:17.032340   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:46:17.033737   69904 fix.go:112] recreateIfNeeded on default-k8s-diff-port-093771: state=Stopped err=<nil>
	I0924 19:46:17.033761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	W0924 19:46:17.033912   69904 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:17.036154   69904 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-093771" ...
	I0924 19:46:13.001046   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:13.001487   69576 main.go:141] libmachine: (no-preload-965745) DBG | unable to find current IP address of domain no-preload-965745 in network mk-no-preload-965745
	I0924 19:46:13.001518   69576 main.go:141] libmachine: (no-preload-965745) DBG | I0924 19:46:13.001448   70821 retry.go:31] will retry after 2.789870388s: waiting for machine to come up
	I0924 19:46:15.792496   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793014   69576 main.go:141] libmachine: (no-preload-965745) Found IP for machine: 192.168.39.134
	I0924 19:46:15.793035   69576 main.go:141] libmachine: (no-preload-965745) Reserving static IP address...
	I0924 19:46:15.793051   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has current primary IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.793564   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.793590   69576 main.go:141] libmachine: (no-preload-965745) DBG | skip adding static IP to network mk-no-preload-965745 - found existing host DHCP lease matching {name: "no-preload-965745", mac: "52:54:00:c4:4b:79", ip: "192.168.39.134"}
	I0924 19:46:15.793602   69576 main.go:141] libmachine: (no-preload-965745) Reserved static IP address: 192.168.39.134
	I0924 19:46:15.793631   69576 main.go:141] libmachine: (no-preload-965745) DBG | Getting to WaitForSSH function...
	I0924 19:46:15.793643   69576 main.go:141] libmachine: (no-preload-965745) Waiting for SSH to be available...
	I0924 19:46:15.795732   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796002   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.796023   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.796169   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH client type: external
	I0924 19:46:15.796196   69576 main.go:141] libmachine: (no-preload-965745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa (-rw-------)
	I0924 19:46:15.796227   69576 main.go:141] libmachine: (no-preload-965745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:15.796241   69576 main.go:141] libmachine: (no-preload-965745) DBG | About to run SSH command:
	I0924 19:46:15.796247   69576 main.go:141] libmachine: (no-preload-965745) DBG | exit 0
	I0924 19:46:15.922480   69576 main.go:141] libmachine: (no-preload-965745) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:15.922886   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetConfigRaw
	I0924 19:46:15.923532   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:15.925814   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926152   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.926180   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.926341   69576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/config.json ...
	I0924 19:46:15.926506   69576 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:15.926523   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:15.926755   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:15.929175   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929512   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:15.929539   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:15.929647   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:15.929805   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.929956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:15.930041   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:15.930184   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:15.930374   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:15.930386   69576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:16.038990   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:16.039018   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039241   69576 buildroot.go:166] provisioning hostname "no-preload-965745"
	I0924 19:46:16.039266   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.039459   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.042183   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042567   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.042603   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.042728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.042929   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043085   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.043264   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.043431   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.043611   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.043624   69576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965745 && echo "no-preload-965745" | sudo tee /etc/hostname
	I0924 19:46:16.163262   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965745
	
	I0924 19:46:16.163289   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.165935   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166256   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.166276   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.166415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.166602   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166728   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.166876   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.167005   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.167219   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.167244   69576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965745/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:16.282661   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:16.282689   69576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:16.282714   69576 buildroot.go:174] setting up certificates
	I0924 19:46:16.282723   69576 provision.go:84] configureAuth start
	I0924 19:46:16.282734   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetMachineName
	I0924 19:46:16.283017   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:16.285665   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286113   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.286140   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.286283   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.288440   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288750   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.288775   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.288932   69576 provision.go:143] copyHostCerts
	I0924 19:46:16.288984   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:16.288996   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:16.289093   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:16.289206   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:16.289221   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:16.289265   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:16.289341   69576 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:16.289350   69576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:16.289385   69576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:16.289451   69576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.no-preload-965745 san=[127.0.0.1 192.168.39.134 localhost minikube no-preload-965745]
	I0924 19:46:16.400236   69576 provision.go:177] copyRemoteCerts
	I0924 19:46:16.400302   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:16.400330   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.402770   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403069   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.403107   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.403226   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.403415   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.403678   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.403826   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.488224   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:16.509856   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:46:16.531212   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:16.552758   69576 provision.go:87] duration metric: took 270.023746ms to configureAuth
	I0924 19:46:16.552787   69576 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:16.552980   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:16.553045   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.555463   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555792   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.555812   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.555992   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.556190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556337   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.556447   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.556569   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.556756   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.556774   69576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:16.777283   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:16.777305   69576 machine.go:96] duration metric: took 850.787273ms to provisionDockerMachine
	I0924 19:46:16.777318   69576 start.go:293] postStartSetup for "no-preload-965745" (driver="kvm2")
	I0924 19:46:16.777330   69576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:16.777348   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:16.777726   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:16.777751   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.780187   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780591   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.780632   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.780812   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.781015   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.781163   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.781359   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:16.864642   69576 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:16.868296   69576 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:16.868317   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:16.868379   69576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:16.868456   69576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:16.868549   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:16.877019   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:16.898717   69576 start.go:296] duration metric: took 121.386885ms for postStartSetup
	I0924 19:46:16.898752   69576 fix.go:56] duration metric: took 18.195069583s for fixHost
	I0924 19:46:16.898772   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:16.901284   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901593   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:16.901620   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:16.901773   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:16.901965   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902143   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:16.902278   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:16.902416   69576 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:16.902572   69576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0924 19:46:16.902580   69576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:17.010942   69576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207176.987992125
	
	I0924 19:46:17.010968   69576 fix.go:216] guest clock: 1727207176.987992125
	I0924 19:46:17.010977   69576 fix.go:229] Guest: 2024-09-24 19:46:16.987992125 +0000 UTC Remote: 2024-09-24 19:46:16.898755451 +0000 UTC m=+279.432619611 (delta=89.236674ms)
	I0924 19:46:17.011002   69576 fix.go:200] guest clock delta is within tolerance: 89.236674ms
	I0924 19:46:17.011008   69576 start.go:83] releasing machines lock for "no-preload-965745", held for 18.307345605s
	I0924 19:46:17.011044   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.011314   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:17.014130   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014475   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.014510   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.014661   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015160   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015331   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:17.015443   69576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:17.015485   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.015512   69576 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:17.015536   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:17.018062   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018324   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018392   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018416   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018531   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.018681   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.018754   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:17.018805   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:17.018814   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.018956   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:17.019039   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.019130   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:17.019295   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:17.019483   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:17.120138   69576 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:17.125567   69576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:17.269403   69576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:17.275170   69576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:17.275229   69576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:17.290350   69576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:17.290374   69576 start.go:495] detecting cgroup driver to use...
	I0924 19:46:17.290437   69576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:17.310059   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:17.323377   69576 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:17.323440   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:17.336247   69576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:17.349168   69576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:17.461240   69576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:17.606562   69576 docker.go:233] disabling docker service ...
	I0924 19:46:17.606632   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:17.623001   69576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:17.637472   69576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:17.778735   69576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:17.905408   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:17.921465   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:17.938193   69576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:17.938265   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.947686   69576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:17.947748   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.957230   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.966507   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.975768   69576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:17.985288   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:17.995405   69576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.011401   69576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:18.024030   69576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:18.034873   69576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:18.034939   69576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:18.047359   69576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:18.057288   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:18.181067   69576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:18.272703   69576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:18.272779   69576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:18.277272   69576 start.go:563] Will wait 60s for crictl version
	I0924 19:46:18.277338   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.280914   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:18.319509   69576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:18.319603   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.349619   69576 ssh_runner.go:195] Run: crio --version
	I0924 19:46:18.376567   69576 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:17.037598   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Start
	I0924 19:46:17.037763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring networks are active...
	I0924 19:46:17.038517   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network default is active
	I0924 19:46:17.038875   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Ensuring network mk-default-k8s-diff-port-093771 is active
	I0924 19:46:17.039247   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Getting domain xml...
	I0924 19:46:17.039971   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Creating domain...
	I0924 19:46:18.369133   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting to get IP...
	I0924 19:46:18.370069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370537   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.370589   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.370490   70958 retry.go:31] will retry after 309.496724ms: waiting for machine to come up
	I0924 19:46:18.682355   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682933   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.682982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.682901   70958 retry.go:31] will retry after 274.120659ms: waiting for machine to come up
	I0924 19:46:18.958554   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:18.959044   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:18.958981   70958 retry.go:31] will retry after 301.44935ms: waiting for machine to come up
	I0924 19:46:18.377928   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetIP
	I0924 19:46:18.380767   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381227   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:18.381343   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:18.381519   69576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:18.385510   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:18.398125   69576 kubeadm.go:883] updating cluster {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:18.398269   69576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:18.398324   69576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:18.433136   69576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:18.433158   69576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:18.433221   69576 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.433232   69576 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.433266   69576 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.433288   69576 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.433295   69576 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.433348   69576 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.433369   69576 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 19:46:18.433406   69576 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435096   69576 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.435095   69576 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.435130   69576 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 19:46:18.435125   69576 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.435167   69576 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.435282   69576 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.435312   69576 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:18.435355   69576 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.586269   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.594361   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.594399   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.595814   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.600629   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.625054   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.626264   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 19:46:18.648420   69576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 19:46:18.648471   69576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.648519   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.736906   69576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 19:46:18.736967   69576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.736995   69576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 19:46:18.737033   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737038   69576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.736924   69576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 19:46:18.737086   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.737094   69576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.737129   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.738294   69576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 19:46:18.738322   69576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.738372   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.759842   69576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 19:46:18.759877   69576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.759920   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:18.863913   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.864011   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:18.863924   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.863940   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:18.863970   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:18.863980   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:18.982915   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:18.982954   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:18.983003   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.005899   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.005922   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.005993   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.085255   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 19:46:19.085357   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 19:46:19.085385   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 19:46:19.140884   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 19:46:19.140951   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 19:46:19.141049   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 19:46:19.186906   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 19:46:19.187032   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.190934   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 19:46:19.191034   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:19.219210   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 19:46:19.219345   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:19.250400   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 19:46:19.250433   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:19.250510   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 19:46:19.250541   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 19:46:19.250557   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.250511   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:19.250575   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 19:46:19.250589   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:19.250595   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 19:46:19.250597   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 19:46:19.263357   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 19:46:19.422736   69576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.705978   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.455378333s)
	I0924 19:46:21.706013   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.455386133s)
	I0924 19:46:21.706050   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 19:46:21.706075   69576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706086   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.455478137s)
	I0924 19:46:21.706116   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 19:46:21.706023   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 19:46:21.706127   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 19:46:21.706162   69576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.283401294s)
	I0924 19:46:21.706195   69576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 19:46:21.706223   69576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:21.706267   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:46:19.262500   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263016   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.263065   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.262997   70958 retry.go:31] will retry after 463.004617ms: waiting for machine to come up
	I0924 19:46:19.727528   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728017   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:19.728039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:19.727972   70958 retry.go:31] will retry after 463.942506ms: waiting for machine to come up
	I0924 19:46:20.193614   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194039   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.194066   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.193993   70958 retry.go:31] will retry after 595.200456ms: waiting for machine to come up
	I0924 19:46:20.790814   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:20.791290   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:20.791229   70958 retry.go:31] will retry after 862.850861ms: waiting for machine to come up
	I0924 19:46:21.655227   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:21.655732   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:21.655652   70958 retry.go:31] will retry after 1.436744818s: waiting for machine to come up
	I0924 19:46:23.093891   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094619   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:23.094652   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:23.094545   70958 retry.go:31] will retry after 1.670034049s: waiting for machine to come up
	I0924 19:46:23.573866   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.867718194s)
	I0924 19:46:23.573911   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 19:46:23.573942   69576 ssh_runner.go:235] Completed: which crictl: (1.867653076s)
	I0924 19:46:23.574009   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:23.573947   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:23.574079   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 19:46:24.924292   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.35018601s)
	I0924 19:46:24.924325   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 19:46:24.924325   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.350292754s)
	I0924 19:46:24.924351   69576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 19:46:24.924400   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:24.765982   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766453   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:24.766486   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:24.766399   70958 retry.go:31] will retry after 2.142103801s: waiting for machine to come up
	I0924 19:46:26.911998   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:26.912425   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:26.912350   70958 retry.go:31] will retry after 1.90953864s: waiting for machine to come up
	I0924 19:46:28.823807   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824294   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:28.824324   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:28.824242   70958 retry.go:31] will retry after 2.249657554s: waiting for machine to come up
	I0924 19:46:28.202705   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.278273074s)
	I0924 19:46:28.202736   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 19:46:28.202759   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202781   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.278300546s)
	I0924 19:46:28.202798   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 19:46:28.202862   69576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.667334937s)
	I0924 19:46:29.870195   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 19:46:29.870161   69576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.667273921s)
	I0924 19:46:29.870218   69576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870248   69576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 19:46:29.870269   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 19:46:29.870357   69576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922800   69576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.05250542s)
	I0924 19:46:31.922865   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 19:46:31.922894   69576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.052511751s)
	I0924 19:46:31.922928   69576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 19:46:31.922938   69576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.922996   69576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 19:46:31.076197   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076624   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | unable to find current IP address of domain default-k8s-diff-port-093771 in network mk-default-k8s-diff-port-093771
	I0924 19:46:31.076660   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | I0924 19:46:31.076579   70958 retry.go:31] will retry after 3.538260641s: waiting for machine to come up
	I0924 19:46:35.823566   70152 start.go:364] duration metric: took 3m49.223945366s to acquireMachinesLock for "old-k8s-version-510301"
	I0924 19:46:35.823654   70152 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:35.823666   70152 fix.go:54] fixHost starting: 
	I0924 19:46:35.824101   70152 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:35.824161   70152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:35.844327   70152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0924 19:46:35.844741   70152 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:35.845377   70152 main.go:141] libmachine: Using API Version  1
	I0924 19:46:35.845402   70152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:35.845769   70152 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:35.845997   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:35.846186   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetState
	I0924 19:46:35.847728   70152 fix.go:112] recreateIfNeeded on old-k8s-version-510301: state=Stopped err=<nil>
	I0924 19:46:35.847754   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	W0924 19:46:35.847912   70152 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:35.849981   70152 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510301" ...
	I0924 19:46:35.851388   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .Start
	I0924 19:46:35.851573   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring networks are active...
	I0924 19:46:35.852445   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network default is active
	I0924 19:46:35.852832   70152 main.go:141] libmachine: (old-k8s-version-510301) Ensuring network mk-old-k8s-version-510301 is active
	I0924 19:46:35.853342   70152 main.go:141] libmachine: (old-k8s-version-510301) Getting domain xml...
	I0924 19:46:35.854028   70152 main.go:141] libmachine: (old-k8s-version-510301) Creating domain...
	I0924 19:46:34.618473   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.618980   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Found IP for machine: 192.168.50.116
	I0924 19:46:34.619006   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserving static IP address...
	I0924 19:46:34.619022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has current primary IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.619475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.619520   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Reserved static IP address: 192.168.50.116
	I0924 19:46:34.619540   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | skip adding static IP to network mk-default-k8s-diff-port-093771 - found existing host DHCP lease matching {name: "default-k8s-diff-port-093771", mac: "52:54:00:21:4a:f5", ip: "192.168.50.116"}
	I0924 19:46:34.619559   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Getting to WaitForSSH function...
	I0924 19:46:34.619573   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Waiting for SSH to be available...
	I0924 19:46:34.621893   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622318   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.622346   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.622525   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH client type: external
	I0924 19:46:34.622553   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa (-rw-------)
	I0924 19:46:34.622584   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:34.622603   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | About to run SSH command:
	I0924 19:46:34.622621   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | exit 0
	I0924 19:46:34.746905   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | SSH cmd err, output: <nil>: 
	I0924 19:46:34.747246   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetConfigRaw
	I0924 19:46:34.747867   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:34.750507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751020   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.751052   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.751327   69904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/config.json ...
	I0924 19:46:34.751516   69904 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:34.751533   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:34.751773   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.754088   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754380   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.754400   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.754510   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.754703   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.754988   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.755201   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.755479   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.755714   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.755727   69904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:34.854791   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:34.854816   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855126   69904 buildroot.go:166] provisioning hostname "default-k8s-diff-port-093771"
	I0924 19:46:34.855157   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:34.855362   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.858116   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858459   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.858491   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.858639   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.858821   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859002   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.859124   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.859281   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.859444   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.859458   69904 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-093771 && echo "default-k8s-diff-port-093771" | sudo tee /etc/hostname
	I0924 19:46:34.974247   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-093771
	
	I0924 19:46:34.974285   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:34.977117   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977514   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:34.977544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:34.977781   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:34.978011   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978184   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:34.978326   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:34.978512   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:34.978736   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:34.978761   69904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-093771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-093771/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-093771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:35.096102   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:35.096132   69904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:35.096172   69904 buildroot.go:174] setting up certificates
	I0924 19:46:35.096182   69904 provision.go:84] configureAuth start
	I0924 19:46:35.096192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetMachineName
	I0924 19:46:35.096501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.099177   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099529   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.099563   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.099743   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.102392   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102744   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.102771   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.102941   69904 provision.go:143] copyHostCerts
	I0924 19:46:35.102988   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:35.102996   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:35.103053   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:35.103147   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:35.103155   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:35.103176   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:35.103229   69904 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:35.103237   69904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:35.103255   69904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:35.103319   69904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-093771 san=[127.0.0.1 192.168.50.116 default-k8s-diff-port-093771 localhost minikube]
	I0924 19:46:35.213279   69904 provision.go:177] copyRemoteCerts
	I0924 19:46:35.213364   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:35.213396   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.216668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217114   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.217150   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.217374   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.217544   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.217759   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.217937   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.300483   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:35.323893   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 19:46:35.346838   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:35.368788   69904 provision.go:87] duration metric: took 272.591773ms to configureAuth
	I0924 19:46:35.368819   69904 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:35.369032   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:35.369107   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.372264   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372571   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.372601   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.372833   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.373033   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373221   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.373395   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.373595   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.373768   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.373800   69904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:35.593954   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:35.593983   69904 machine.go:96] duration metric: took 842.454798ms to provisionDockerMachine
	I0924 19:46:35.593998   69904 start.go:293] postStartSetup for "default-k8s-diff-port-093771" (driver="kvm2")
	I0924 19:46:35.594011   69904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:35.594032   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.594381   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:35.594415   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.597073   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597475   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.597531   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.597668   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.597886   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.598061   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.598225   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.677749   69904 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:35.682185   69904 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:35.682220   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:35.682302   69904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:35.682402   69904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:35.682514   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:35.692308   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:35.717006   69904 start.go:296] duration metric: took 122.993776ms for postStartSetup
	I0924 19:46:35.717045   69904 fix.go:56] duration metric: took 18.705866197s for fixHost
	I0924 19:46:35.717069   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.720111   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720478   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.720507   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.720702   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.720913   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721078   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.721208   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.721368   69904 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:35.721547   69904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0924 19:46:35.721558   69904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:35.823421   69904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207195.798332273
	
	I0924 19:46:35.823444   69904 fix.go:216] guest clock: 1727207195.798332273
	I0924 19:46:35.823454   69904 fix.go:229] Guest: 2024-09-24 19:46:35.798332273 +0000 UTC Remote: 2024-09-24 19:46:35.717049796 +0000 UTC m=+256.522802974 (delta=81.282477ms)
	I0924 19:46:35.823478   69904 fix.go:200] guest clock delta is within tolerance: 81.282477ms
	I0924 19:46:35.823484   69904 start.go:83] releasing machines lock for "default-k8s-diff-port-093771", held for 18.812344302s
	I0924 19:46:35.823511   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.823795   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:35.827240   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827580   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.827612   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.827798   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828501   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828695   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:46:35.828788   69904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:35.828840   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.828982   69904 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:35.829022   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:46:35.831719   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.831888   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832098   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832125   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832350   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832419   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:35.832446   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:35.832518   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832608   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:46:35.832688   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.832761   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:46:35.832834   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.832898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:46:35.833000   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:46:35.913010   69904 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:35.936917   69904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:36.082528   69904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:36.090012   69904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:36.090111   69904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:36.109409   69904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:36.109434   69904 start.go:495] detecting cgroup driver to use...
	I0924 19:46:36.109509   69904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:36.130226   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:36.142975   69904 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:36.143037   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:36.159722   69904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:36.174702   69904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:36.315361   69904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:36.491190   69904 docker.go:233] disabling docker service ...
	I0924 19:46:36.491259   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:36.513843   69904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:36.530208   69904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:36.658600   69904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:36.806048   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:36.821825   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:36.841750   69904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:46:36.841819   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.853349   69904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:36.853432   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.865214   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.877600   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.889363   69904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:36.901434   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.911763   69904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.929057   69904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:36.939719   69904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:36.949326   69904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:36.949399   69904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:36.969647   69904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:36.984522   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:37.132041   69904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:37.238531   69904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:37.238638   69904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:37.243752   69904 start.go:563] Will wait 60s for crictl version
	I0924 19:46:37.243811   69904 ssh_runner.go:195] Run: which crictl
	I0924 19:46:37.247683   69904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:37.282843   69904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:46:37.282932   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.318022   69904 ssh_runner.go:195] Run: crio --version
	I0924 19:46:37.356586   69904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:46:32.569181   69576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 19:46:32.569229   69576 cache_images.go:123] Successfully loaded all cached images
	I0924 19:46:32.569236   69576 cache_images.go:92] duration metric: took 14.136066072s to LoadCachedImages
	I0924 19:46:32.569250   69576 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.31.1 crio true true} ...
	I0924 19:46:32.569372   69576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-965745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:32.569453   69576 ssh_runner.go:195] Run: crio config
	I0924 19:46:32.610207   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:32.610236   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:32.610247   69576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:32.610284   69576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-965745 NodeName:no-preload-965745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:32.610407   69576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-965745"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:32.610465   69576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:32.620532   69576 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:32.620616   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:32.629642   69576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:46:32.644863   69576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:32.659420   69576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 19:46:32.674590   69576 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:32.677861   69576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:32.688560   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:32.791827   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:32.807240   69576 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745 for IP: 192.168.39.134
	I0924 19:46:32.807266   69576 certs.go:194] generating shared ca certs ...
	I0924 19:46:32.807286   69576 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:32.807447   69576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:32.807502   69576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:32.807515   69576 certs.go:256] generating profile certs ...
	I0924 19:46:32.807645   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/client.key
	I0924 19:46:32.807736   69576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key.6934b726
	I0924 19:46:32.807799   69576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key
	I0924 19:46:32.807950   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:32.807997   69576 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:32.808011   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:32.808045   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:32.808076   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:32.808111   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:32.808168   69576 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:32.809039   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:32.866086   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:32.892458   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:32.925601   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:32.956936   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 19:46:32.979570   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:46:33.001159   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:33.022216   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/no-preload-965745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:46:33.044213   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:33.065352   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:33.086229   69576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:33.107040   69576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:33.122285   69576 ssh_runner.go:195] Run: openssl version
	I0924 19:46:33.127664   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:33.137277   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141239   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.141289   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:33.146498   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:46:33.156352   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:33.166235   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170189   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.170233   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:33.175345   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:33.185095   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:33.194846   69576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199024   69576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.199084   69576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:33.204244   69576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:33.214142   69576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:33.218178   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:33.223659   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:33.228914   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:33.234183   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:33.239611   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:33.244844   69576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:46:33.250012   69576 kubeadm.go:392] StartCluster: {Name:no-preload-965745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-965745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:33.250094   69576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:33.250128   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.282919   69576 cri.go:89] found id: ""
	I0924 19:46:33.282980   69576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:33.292578   69576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:33.292605   69576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:33.292665   69576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:33.301695   69576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:33.303477   69576 kubeconfig.go:125] found "no-preload-965745" server: "https://192.168.39.134:8443"
	I0924 19:46:33.306052   69576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:33.314805   69576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.134
	I0924 19:46:33.314843   69576 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:33.314857   69576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:33.314907   69576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:33.346457   69576 cri.go:89] found id: ""
	I0924 19:46:33.346523   69576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:33.361257   69576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:33.370192   69576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:33.370209   69576 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:33.370246   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:46:33.378693   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:33.378735   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:33.387379   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:33.395516   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:33.395555   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:33.404216   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.412518   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:33.412564   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:33.421332   69576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:46:33.430004   69576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:33.430067   69576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:33.438769   69576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:33.447918   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:33.547090   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.162139   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.345688   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:34.400915   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
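Rather than a full `kubeadm init`, the restart path above replays individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal sketch of the same sequence via os/exec (illustrative only, not minikube's implementation):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			panic(err) // one failed phase aborts the restart
    		}
    	}
    }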
	I0924 19:46:34.479925   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:34.480005   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:34.980773   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.480568   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:35.515707   69576 api_server.go:72] duration metric: took 1.035779291s to wait for apiserver process to appear ...
	I0924 19:46:35.515736   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:35.515759   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:37.357928   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetIP
	I0924 19:46:37.361222   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.361720   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:46:37.361763   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:46:37.362089   69904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:37.366395   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:37.383334   69904 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:37.383451   69904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:46:37.383503   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:37.425454   69904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:46:37.425528   69904 ssh_runner.go:195] Run: which lz4
	I0924 19:46:37.430589   69904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:37.435668   69904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:37.435702   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:46:38.688183   69904 crio.go:462] duration metric: took 1.257629121s to copy over tarball
	I0924 19:46:38.688265   69904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:38.577925   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.577956   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:38.577971   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:38.617929   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:38.617970   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:39.015942   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.024069   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.024108   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:39.516830   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:39.522389   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:39.522423   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.015905   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.024316   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:40.024344   69576 api_server.go:103] status: https://192.168.39.134:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:40.515871   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:46:40.524708   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:46:40.533300   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:40.533330   69576 api_server.go:131] duration metric: took 5.017586868s to wait for apiserver health ...
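The healthz sequence above is typical of a cold apiserver start: anonymous requests are rejected with 403 until the RBAC bootstrap roles exist, /healthz then reports 500 while the remaining post-start hooks finish, and finally the endpoint returns 200 "ok". A rough sketch of such a poll loop follows; it is not minikube's api_server.go, and it skips TLS verification and authentication purely to stay self-contained (the real check presents the cluster's client certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// NOTE: InsecureSkipVerify is used here only to keep the sketch short.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.39.134:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
    	}
    }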
	I0924 19:46:40.533341   69576 cni.go:84] Creating CNI manager for ""
	I0924 19:46:40.533350   69576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:40.535207   69576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:37.184620   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting to get IP...
	I0924 19:46:37.185660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.186074   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.186151   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.186052   71118 retry.go:31] will retry after 294.949392ms: waiting for machine to come up
	I0924 19:46:37.482814   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.483327   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.483356   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.483268   71118 retry.go:31] will retry after 344.498534ms: waiting for machine to come up
	I0924 19:46:37.830045   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:37.830715   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:37.830748   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:37.830647   71118 retry.go:31] will retry after 342.025563ms: waiting for machine to come up
	I0924 19:46:38.174408   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.176008   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.176040   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.175906   71118 retry.go:31] will retry after 456.814011ms: waiting for machine to come up
	I0924 19:46:38.634792   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:38.635533   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:38.635566   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:38.635443   71118 retry.go:31] will retry after 582.88697ms: waiting for machine to come up
	I0924 19:46:39.220373   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.220869   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.220899   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.220811   71118 retry.go:31] will retry after 648.981338ms: waiting for machine to come up
	I0924 19:46:39.872016   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:39.872615   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:39.872645   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:39.872571   71118 retry.go:31] will retry after 1.138842254s: waiting for machine to come up
	I0924 19:46:41.012974   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:41.013539   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:41.013575   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:41.013489   71118 retry.go:31] will retry after 996.193977ms: waiting for machine to come up
	I0924 19:46:40.536733   69576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:40.547944   69576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
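The 496-byte /etc/cni/net.d/1-k8s.conflist written above carries the bridge CNI configuration recommended for the kvm2 + crio combination. The exact contents are not shown in the log; the snippet below is a hypothetical conflist of roughly that shape, together with the file write, and should be read as an assumption rather than the file minikube actually deploys:

    package main

    import "os"

    // conflist is a hypothetical bridge CNI configuration; the real 1-k8s.conflist
    // from this run is not captured in the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }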
	I0924 19:46:40.577608   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:40.595845   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:40.595910   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:40.595922   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:40.595934   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:40.595947   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:40.595957   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:40.595967   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:40.595980   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:40.595986   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:46:40.595995   69576 system_pods.go:74] duration metric: took 18.365618ms to wait for pod list to return data ...
	I0924 19:46:40.596006   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:40.599781   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:40.599809   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:40.599822   69576 node_conditions.go:105] duration metric: took 3.810089ms to run NodePressure ...
	I0924 19:46:40.599842   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:40.916081   69576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921516   69576 kubeadm.go:739] kubelet initialised
	I0924 19:46:40.921545   69576 kubeadm.go:740] duration metric: took 5.434388ms waiting for restarted kubelet to initialise ...
	I0924 19:46:40.921569   69576 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:40.926954   69576 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.931807   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931825   69576 pod_ready.go:82] duration metric: took 4.85217ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.931833   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.931840   69576 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.936614   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936636   69576 pod_ready.go:82] duration metric: took 4.788888ms for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.936646   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "etcd-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.936654   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.941669   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941684   69576 pod_ready.go:82] duration metric: took 5.022921ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.941691   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-apiserver-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.941697   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:40.981457   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981487   69576 pod_ready.go:82] duration metric: took 39.779589ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:40.981500   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:40.981512   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.381145   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381172   69576 pod_ready.go:82] duration metric: took 399.651445ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.381183   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-proxy-ng8vf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.381191   69576 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:41.780780   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780802   69576 pod_ready.go:82] duration metric: took 399.60413ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:41.780811   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "kube-scheduler-no-preload-965745" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:41.780818   69576 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:42.181235   69576 pod_ready.go:98] node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181264   69576 pod_ready.go:82] duration metric: took 400.43573ms for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:42.181278   69576 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-965745" hosting pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:42.181287   69576 pod_ready.go:39] duration metric: took 1.259692411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
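In the WaitExtra loop above, every system-critical pod is skipped while the node itself still reports Ready=False; pod-level readiness only becomes meaningful once the node is Ready. An illustrative client-go sketch of the per-pod check (not minikube's pod_ready.go; the kubeconfig path is the one used in this run):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-965745", metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				// a pod counts as Ready once its PodReady condition is True
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(400 * time.Millisecond)
    	}
    }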
	I0924 19:46:42.181306   69576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:46:42.192253   69576 ops.go:34] apiserver oom_adj: -16
	I0924 19:46:42.192274   69576 kubeadm.go:597] duration metric: took 8.899661487s to restartPrimaryControlPlane
	I0924 19:46:42.192285   69576 kubeadm.go:394] duration metric: took 8.942279683s to StartCluster
	I0924 19:46:42.192302   69576 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.192388   69576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:46:42.194586   69576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:42.194926   69576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:46:42.195047   69576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:46:42.195118   69576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-965745"
	I0924 19:46:42.195137   69576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-965745"
	W0924 19:46:42.195145   69576 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:46:42.195150   69576 addons.go:69] Setting default-storageclass=true in profile "no-preload-965745"
	I0924 19:46:42.195167   69576 addons.go:69] Setting metrics-server=true in profile "no-preload-965745"
	I0924 19:46:42.195174   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195177   69576 config.go:182] Loaded profile config "no-preload-965745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:46:42.195182   69576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-965745"
	I0924 19:46:42.195185   69576 addons.go:234] Setting addon metrics-server=true in "no-preload-965745"
	W0924 19:46:42.195194   69576 addons.go:243] addon metrics-server should already be in state true
	I0924 19:46:42.195219   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.195593   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195609   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.195643   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195658   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.195736   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.196723   69576 out.go:177] * Verifying Kubernetes components...
	I0924 19:46:42.198152   69576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:42.212617   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0924 19:46:42.213165   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.213669   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.213695   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.214078   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.214268   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.216100   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0924 19:46:42.216467   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.216915   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.216934   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.217274   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.217317   69576 addons.go:234] Setting addon default-storageclass=true in "no-preload-965745"
	W0924 19:46:42.217329   69576 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:46:42.217357   69576 host.go:66] Checking if "no-preload-965745" exists ...
	I0924 19:46:42.217629   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217666   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.217870   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.217915   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.236569   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0924 19:46:42.236995   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.236999   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0924 19:46:42.237477   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.237606   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.237630   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.237989   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.238081   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.238103   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.238605   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.238645   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.238851   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.239570   69576 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:42.239624   69576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:42.243303   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0924 19:46:42.243749   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.244205   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.244225   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.244541   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.244860   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.246518   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.248349   69576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:42.249690   69576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.249706   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:46:42.249724   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.256169   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0924 19:46:42.256413   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256626   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.256648   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.256801   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.256952   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.257080   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.257136   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.257247   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.257656   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.257671   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.257975   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.258190   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.259449   69576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0924 19:46:42.259667   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.260521   69576 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:42.260996   69576 main.go:141] libmachine: Using API Version  1
	I0924 19:46:42.261009   69576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:42.261374   69576 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:42.261457   69576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:46:42.261544   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetState
	I0924 19:46:42.262754   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:46:42.262769   69576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:46:42.262787   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.263351   69576 main.go:141] libmachine: (no-preload-965745) Calling .DriverName
	I0924 19:46:42.263661   69576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.263677   69576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:46:42.263691   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHHostname
	I0924 19:46:42.266205   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266653   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.266672   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.266974   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.267122   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.267234   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.267342   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.267589   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.267935   69576 main.go:141] libmachine: (no-preload-965745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:4b:79", ip: ""} in network mk-no-preload-965745: {Iface:virbr1 ExpiryTime:2024-09-24 20:46:08 +0000 UTC Type:0 Mac:52:54:00:c4:4b:79 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:no-preload-965745 Clientid:01:52:54:00:c4:4b:79}
	I0924 19:46:42.267951   69576 main.go:141] libmachine: (no-preload-965745) DBG | domain no-preload-965745 has defined IP address 192.168.39.134 and MAC address 52:54:00:c4:4b:79 in network mk-no-preload-965745
	I0924 19:46:42.268213   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHPort
	I0924 19:46:42.268331   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHKeyPath
	I0924 19:46:42.268417   69576 main.go:141] libmachine: (no-preload-965745) Calling .GetSSHUsername
	I0924 19:46:42.268562   69576 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/no-preload-965745/id_rsa Username:docker}
	I0924 19:46:42.408715   69576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:42.425635   69576 node_ready.go:35] waiting up to 6m0s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:40.944536   69904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256242572s)
	I0924 19:46:40.944565   69904 crio.go:469] duration metric: took 2.25635162s to extract the tarball
	I0924 19:46:40.944574   69904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:40.981609   69904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:41.019006   69904 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:46:41.019026   69904 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:46:41.019035   69904 kubeadm.go:934] updating node { 192.168.50.116 8444 v1.31.1 crio true true} ...
	I0924 19:46:41.019146   69904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-093771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:46:41.019233   69904 ssh_runner.go:195] Run: crio config
	I0924 19:46:41.064904   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:41.064927   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:41.064938   69904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:46:41.064957   69904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-093771 NodeName:default-k8s-diff-port-093771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:46:41.065089   69904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-093771"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:46:41.065142   69904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:46:41.075518   69904 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:46:41.075604   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:46:41.084461   69904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 19:46:41.099383   69904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:46:41.114093   69904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 19:46:41.129287   69904 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0924 19:46:41.132690   69904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
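
	The bash one-liner above rewrites /etc/hosts in place: it strips any existing line for control-plane.minikube.internal, appends the current control-plane IP, and copies the temporary file back over /etc/hosts. Purely as an illustration of that filtering step (not minikube code), here is a small Go sketch that applies the same logic to an in-memory copy of the file:

	package main

	import (
		"fmt"
		"strings"
	)

	// rewriteHosts mirrors the bash one-liner: drop any line whose hostname
	// field is control-plane.minikube.internal, then append a fresh entry for
	// the given IP. It works on a string instead of the real /etc/hosts.
	func rewriteHosts(hosts, ip string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		current := "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal\n"
		fmt.Print(rewriteHosts(current, "192.168.50.116"))
	}
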
	I0924 19:46:41.144620   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:41.258218   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:46:41.279350   69904 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771 for IP: 192.168.50.116
	I0924 19:46:41.279373   69904 certs.go:194] generating shared ca certs ...
	I0924 19:46:41.279393   69904 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:46:41.279592   69904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:46:41.279668   69904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:46:41.279685   69904 certs.go:256] generating profile certs ...
	I0924 19:46:41.279806   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/client.key
	I0924 19:46:41.279905   69904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key.ee3880b0
	I0924 19:46:41.279968   69904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key
	I0924 19:46:41.280139   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:46:41.280176   69904 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:46:41.280189   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:46:41.280248   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:46:41.280292   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:46:41.280324   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:46:41.280379   69904 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:41.281191   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:46:41.319225   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:46:41.343585   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:46:41.373080   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:46:41.405007   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 19:46:41.434543   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:46:41.458642   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:46:41.480848   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/default-k8s-diff-port-093771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:46:41.502778   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:46:41.525217   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:46:41.548290   69904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:46:41.572569   69904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:46:41.591631   69904 ssh_runner.go:195] Run: openssl version
	I0924 19:46:41.598407   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:46:41.611310   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616372   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.616425   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:46:41.621818   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:46:41.631262   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:46:41.641685   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645781   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.645827   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:46:41.651168   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:46:41.664296   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:46:41.677001   69904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681609   69904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.681650   69904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:46:41.686733   69904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
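
	The symlink commands above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming scheme (<hash>.0), which is how OpenSSL-based clients discover trusted CAs. A minimal sketch of the same convention, assuming an openssl binary on PATH and using illustrative paths:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Illustrative paths; adjust for a real system.
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		certsDir := "/etc/ssl/certs"

		// Same as: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		// Same as: ln -fs <cert> /etc/ssl/certs/<hash>.0
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f: replace an existing link if present
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", certPath)
	}
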
	I0924 19:46:41.696235   69904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:46:41.700431   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:46:41.705979   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:46:41.711363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:46:41.716911   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:46:41.722137   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:46:41.727363   69904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
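
	Each `openssl x509 -noout -in <cert> -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is what lets the restart path reuse the existing control-plane certificates. The equivalent check written against Go's standard library, with a placeholder certificate path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Placeholder path; any PEM-encoded certificate works here.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400: does the cert expire within the next 24 hours?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h, would regenerate")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}
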
	I0924 19:46:41.732646   69904 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-093771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-093771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:46:41.732750   69904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:46:41.732791   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.766796   69904 cri.go:89] found id: ""
	I0924 19:46:41.766883   69904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:46:41.776244   69904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:46:41.776268   69904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:46:41.776316   69904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:46:41.786769   69904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:46:41.787665   69904 kubeconfig.go:125] found "default-k8s-diff-port-093771" server: "https://192.168.50.116:8444"
	I0924 19:46:41.789591   69904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:46:41.798561   69904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
	I0924 19:46:41.798596   69904 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:46:41.798617   69904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:46:41.798661   69904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:46:41.839392   69904 cri.go:89] found id: ""
	I0924 19:46:41.839469   69904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:46:41.854464   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:46:41.863006   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:46:41.863023   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:46:41.863082   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:46:41.871086   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:46:41.871138   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:46:41.880003   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:46:41.890123   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:46:41.890171   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:46:41.901736   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.909613   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:46:41.909670   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:46:41.921595   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:46:41.932589   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:46:41.932654   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:46:41.943735   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:46:41.952064   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.065934   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:42.948388   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.183687   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:43.264336   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
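
	Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) rather than a full init. A rough sketch of driving that same phase sequence from Go, with the phase list and config path taken from the log lines above; nothing beyond that is implied about the actual implementation:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Phase sequence as it appears in the log above.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				log.Fatalf("kubeadm %v failed: %v\n%s", phase, err, out)
			}
		}
		log.Println("all init phases completed")
	}
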
	I0924 19:46:43.353897   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:46:43.353979   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:43.854330   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:42.514864   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:46:42.533161   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:46:42.533181   69576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:46:42.539876   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:46:42.564401   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:46:42.564427   69576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:46:42.598218   69576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:42.598243   69576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:46:42.619014   69576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:46:44.487219   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:45.026145   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.511239735s)
	I0924 19:46:45.026401   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026416   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.026281   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.486373933s)
	I0924 19:46:45.026501   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.026514   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030099   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030118   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030151   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030162   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030166   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030171   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030175   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030179   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030184   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.030192   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.030494   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.030544   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030562   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.030634   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.030662   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.041980   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.042007   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.042336   69576 main.go:141] libmachine: (no-preload-965745) DBG | Closing plugin on server side
	I0924 19:46:45.042391   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.042424   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.120637   69576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.501525022s)
	I0924 19:46:45.120699   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.120714   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.121114   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.121173   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.121197   69576 main.go:141] libmachine: Making call to close driver server
	I0924 19:46:45.121222   69576 main.go:141] libmachine: (no-preload-965745) Calling .Close
	I0924 19:46:45.122653   69576 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:46:45.122671   69576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:46:45.122683   69576 addons.go:475] Verifying addon metrics-server=true in "no-preload-965745"
	I0924 19:46:45.124698   69576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 19:46:42.011562   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:42.011963   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:42.011986   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:42.011932   71118 retry.go:31] will retry after 1.827996528s: waiting for machine to come up
	I0924 19:46:43.841529   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:43.842075   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:43.842106   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:43.842030   71118 retry.go:31] will retry after 2.224896366s: waiting for machine to come up
	I0924 19:46:46.068290   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:46.068761   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:46.068784   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:46.068736   71118 retry.go:31] will retry after 2.630690322s: waiting for machine to come up
	I0924 19:46:45.126030   69576 addons.go:510] duration metric: took 2.930987175s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 19:46:46.930203   69576 node_ready.go:53] node "no-preload-965745" has status "Ready":"False"
	I0924 19:46:44.354690   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:44.854316   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.354861   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:46:45.370596   69904 api_server.go:72] duration metric: took 2.016695722s to wait for apiserver process to appear ...
	I0924 19:46:45.370626   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:46:45.370655   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:45.371182   69904 api_server.go:269] stopped: https://192.168.50.116:8444/healthz: Get "https://192.168.50.116:8444/healthz": dial tcp 192.168.50.116:8444: connect: connection refused
	I0924 19:46:45.870725   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.042928   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.042957   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.042985   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.054732   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:46:48.054759   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:46:48.371230   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.381025   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.381058   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:48.871669   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:48.878407   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:46:48.878440   69904 api_server.go:103] status: https://192.168.50.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:46:49.371018   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:46:49.375917   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:46:49.383318   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:46:49.383352   69904 api_server.go:131] duration metric: took 4.012718503s to wait for apiserver health ...
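
	The healthz wait above tolerates "connection refused" (apiserver not listening yet), 403 (anonymous access before the RBAC bootstrap roles exist), and 500 (post-start hooks still failing) until /healthz finally returns 200 with body `ok`. A minimal polling sketch with the same shape; the endpoint is the one from the log, and TLS verification is skipped only because the probe hits the apiserver's self-signed serving certificate anonymously:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.50.116:8444/healthz" // endpoint from the log above
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for apiserver healthz")
	}
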
	I0924 19:46:49.383362   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:46:49.383368   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:46:49.385326   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:46:48.700927   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:48.701338   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | unable to find current IP address of domain old-k8s-version-510301 in network mk-old-k8s-version-510301
	I0924 19:46:48.701367   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | I0924 19:46:48.701291   71118 retry.go:31] will retry after 3.546152526s: waiting for machine to come up
	I0924 19:46:48.934204   69576 node_ready.go:49] node "no-preload-965745" has status "Ready":"True"
	I0924 19:46:48.934238   69576 node_ready.go:38] duration metric: took 6.508559983s for node "no-preload-965745" to be "Ready" ...
	I0924 19:46:48.934250   69576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:48.941949   69576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947063   69576 pod_ready.go:93] pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:48.947094   69576 pod_ready.go:82] duration metric: took 5.112983ms for pod "coredns-7c65d6cfc9-qb2mm" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:48.947106   69576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.953349   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.519204   69408 start.go:364] duration metric: took 49.813943111s to acquireMachinesLock for "embed-certs-311319"
	I0924 19:46:53.519255   69408 start.go:96] Skipping create...Using existing machine configuration
	I0924 19:46:53.519264   69408 fix.go:54] fixHost starting: 
	I0924 19:46:53.519644   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:46:53.519688   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:46:53.536327   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0924 19:46:53.536874   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:46:53.537424   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:46:53.537449   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:46:53.537804   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:46:53.538009   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:46:53.538172   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:46:53.539842   69408 fix.go:112] recreateIfNeeded on embed-certs-311319: state=Stopped err=<nil>
	I0924 19:46:53.539866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	W0924 19:46:53.540003   69408 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 19:46:53.541719   69408 out.go:177] * Restarting existing kvm2 VM for "embed-certs-311319" ...
	I0924 19:46:49.386740   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:46:49.398816   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:46:49.416805   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:46:49.428112   69904 system_pods.go:59] 8 kube-system pods found
	I0924 19:46:49.428153   69904 system_pods.go:61] "coredns-7c65d6cfc9-h4nm8" [621c3ebb-1eb3-47a4-ba87-68e9caa2f3f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:46:49.428175   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [4251f310-2a54-4473-91ba-0aa57247a8e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:46:49.428196   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [13840d0f-dca8-4b9e-876f-e664bd2ec6e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:46:49.428210   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [30bbbd4d-8609-47fd-9a9f-373a5b63d785] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:46:49.428220   69904 system_pods.go:61] "kube-proxy-4gx4g" [de627472-1155-4ce3-b910-15657e93988e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 19:46:49.428232   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [b1edae56-d98a-4fc8-8a99-c6e27f485c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:46:49.428244   69904 system_pods.go:61] "metrics-server-6867b74b74-rgcll" [11de5d03-9c99-4536-9cfd-b33fe2e11fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:46:49.428256   69904 system_pods.go:61] "storage-provisioner" [3c29f75e-1570-42cd-8430-284527878197] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 19:46:49.428269   69904 system_pods.go:74] duration metric: took 11.441258ms to wait for pod list to return data ...
	I0924 19:46:49.428288   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:46:49.432173   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:46:49.432198   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:46:49.432207   69904 node_conditions.go:105] duration metric: took 3.913746ms to run NodePressure ...
	I0924 19:46:49.432221   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:46:49.707599   69904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712788   69904 kubeadm.go:739] kubelet initialised
	I0924 19:46:49.712808   69904 kubeadm.go:740] duration metric: took 5.18017ms waiting for restarted kubelet to initialise ...
	I0924 19:46:49.712816   69904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:46:49.725245   69904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.731600   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731624   69904 pod_ready.go:82] duration metric: took 6.354998ms for pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.731633   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "coredns-7c65d6cfc9-h4nm8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.731639   69904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.737044   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737067   69904 pod_ready.go:82] duration metric: took 5.419976ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.737083   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.737092   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.742151   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742170   69904 pod_ready.go:82] duration metric: took 5.067452ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.742180   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.742185   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:49.823203   69904 pod_ready.go:98] node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823237   69904 pod_ready.go:82] duration metric: took 81.044673ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	E0924 19:46:49.823253   69904 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-093771" hosting pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-093771" has status "Ready":"False"
	I0924 19:46:49.823262   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220171   69904 pod_ready.go:93] pod "kube-proxy-4gx4g" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:50.220207   69904 pod_ready.go:82] duration metric: took 396.929531ms for pod "kube-proxy-4gx4g" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:50.220219   69904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:52.227683   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
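
	The pod_ready waits shown above skip pods whose node is not yet Ready and otherwise poll each system-critical pod for the PodReady condition. A compact client-go sketch of that single condition check; the kubeconfig path and pod name are placeholders taken from the log, and the client-go module is assumed to be available:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-scheduler-default-k8s-diff-port-093771", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}
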
	I0924 19:46:52.249370   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249921   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has current primary IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.249953   70152 main.go:141] libmachine: (old-k8s-version-510301) Found IP for machine: 192.168.72.81
	I0924 19:46:52.249967   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserving static IP address...
	I0924 19:46:52.250395   70152 main.go:141] libmachine: (old-k8s-version-510301) Reserved static IP address: 192.168.72.81
	I0924 19:46:52.250438   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.250453   70152 main.go:141] libmachine: (old-k8s-version-510301) Waiting for SSH to be available...
	I0924 19:46:52.250479   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | skip adding static IP to network mk-old-k8s-version-510301 - found existing host DHCP lease matching {name: "old-k8s-version-510301", mac: "52:54:00:72:11:f0", ip: "192.168.72.81"}
	I0924 19:46:52.250492   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Getting to WaitForSSH function...
	I0924 19:46:52.252807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253148   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.253176   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.253278   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH client type: external
	I0924 19:46:52.253300   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa (-rw-------)
	I0924 19:46:52.253332   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:46:52.253345   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | About to run SSH command:
	I0924 19:46:52.253354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | exit 0
	I0924 19:46:52.378625   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | SSH cmd err, output: <nil>: 
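
	The WaitForSSH step above simply retries `exit 0` over ssh, with host-key checking disabled and a short connect timeout, until the command succeeds. A stripped-down version of that probe that shells out to the system ssh in the same way; the address comes from the log and the key path is a placeholder:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshAlive returns true once `ssh ... "exit 0"` succeeds against the host.
	func sshAlive(user, host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		for i := 0; i < 30; i++ {
			if sshAlive("docker", "192.168.72.81", "/path/to/id_rsa") {
				fmt.Println("ssh is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for ssh")
	}
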
	I0924 19:46:52.379067   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetConfigRaw
	I0924 19:46:52.379793   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.382222   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382618   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.382647   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.382925   70152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/config.json ...
	I0924 19:46:52.383148   70152 machine.go:93] provisionDockerMachine start ...
	I0924 19:46:52.383174   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:52.383374   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.385984   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386434   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.386460   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.386614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.386788   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387002   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.387167   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.387396   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.387632   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.387645   70152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:46:52.503003   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:46:52.503033   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503320   70152 buildroot.go:166] provisioning hostname "old-k8s-version-510301"
	I0924 19:46:52.503344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.503630   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.506502   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.506817   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.506858   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.507028   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.507216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507394   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.507584   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.507792   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.508016   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.508034   70152 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510301 && echo "old-k8s-version-510301" | sudo tee /etc/hostname
	I0924 19:46:52.634014   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510301
	
	I0924 19:46:52.634040   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.636807   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637156   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.637186   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.637331   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.637528   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637721   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.637866   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.638016   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:52.638228   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:52.638252   70152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510301/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:46:52.754583   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:46:52.754613   70152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:46:52.754645   70152 buildroot.go:174] setting up certificates
	I0924 19:46:52.754653   70152 provision.go:84] configureAuth start
	I0924 19:46:52.754664   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetMachineName
	I0924 19:46:52.754975   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:52.757674   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758024   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.758047   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.758158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.760405   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760722   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.760751   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.760869   70152 provision.go:143] copyHostCerts
	I0924 19:46:52.760928   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:46:52.760942   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:46:52.761009   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:46:52.761125   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:46:52.761141   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:46:52.761180   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:46:52.761262   70152 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:46:52.761274   70152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:46:52.761301   70152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:46:52.761375   70152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510301 san=[127.0.0.1 192.168.72.81 localhost minikube old-k8s-version-510301]
	I0924 19:46:52.906522   70152 provision.go:177] copyRemoteCerts
	I0924 19:46:52.906586   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:46:52.906606   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:52.909264   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909580   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:52.909622   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:52.909777   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:52.909960   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:52.910206   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:52.910313   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:52.997129   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:46:53.020405   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 19:46:53.042194   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 19:46:53.063422   70152 provision.go:87] duration metric: took 308.753857ms to configureAuth
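
	For context on the provision.go step above ("generating server cert ... san=[...]"): the shape of that operation, using only the Go standard library, is roughly the sketch below. This is an illustrative sketch, not minikube's actual provision code; the short file names, the Organization value, and the absence of full error handling are simplifications, and the SAN list simply mirrors the one printed in the log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair (kept under .minikube/certs in the real layout; paths shortened here).
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// SANs mirror the ones printed in the provision.go line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-510301"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.81")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-510301"},
		}

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}

		// Write the server cert/key pair, which the log then scp's to /etc/docker on the VM.
		os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
	}
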
	I0924 19:46:53.063448   70152 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:46:53.063662   70152 config.go:182] Loaded profile config "old-k8s-version-510301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:46:53.063752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.066435   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.066850   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.066877   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.067076   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.067247   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067382   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.067546   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.067749   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.067935   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.067958   70152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:46:53.288436   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:46:53.288463   70152 machine.go:96] duration metric: took 905.298763ms to provisionDockerMachine
	I0924 19:46:53.288476   70152 start.go:293] postStartSetup for "old-k8s-version-510301" (driver="kvm2")
	I0924 19:46:53.288486   70152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:46:53.288513   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.288841   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:46:53.288869   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.291363   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291643   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.291660   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.291867   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.292054   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.292210   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.292337   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.372984   70152 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:46:53.377049   70152 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:46:53.377072   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:46:53.377158   70152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:46:53.377250   70152 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:46:53.377339   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:46:53.385950   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:46:53.408609   70152 start.go:296] duration metric: took 120.112789ms for postStartSetup
	I0924 19:46:53.408654   70152 fix.go:56] duration metric: took 17.584988201s for fixHost
	I0924 19:46:53.408677   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.411723   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412100   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.412124   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.412309   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.412544   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412752   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.412892   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.413075   70152 main.go:141] libmachine: Using SSH client type: native
	I0924 19:46:53.413260   70152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0924 19:46:53.413272   70152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:46:53.519060   70152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207213.488062061
	
	I0924 19:46:53.519081   70152 fix.go:216] guest clock: 1727207213.488062061
	I0924 19:46:53.519090   70152 fix.go:229] Guest: 2024-09-24 19:46:53.488062061 +0000 UTC Remote: 2024-09-24 19:46:53.408658589 +0000 UTC m=+246.951196346 (delta=79.403472ms)
	I0924 19:46:53.519120   70152 fix.go:200] guest clock delta is within tolerance: 79.403472ms
	I0924 19:46:53.519127   70152 start.go:83] releasing machines lock for "old-k8s-version-510301", held for 17.695500754s
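
	The fix.go lines above are the guest-clock sanity check: minikube runs `date +%s.%N` over SSH, parses the result, and compares it with the host clock. A minimal stand-alone sketch of that comparison follows; the helper name is illustrative, and the timestamp is simply the value captured in the log.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output such as "1727207213.488062061" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1727207213.488062061") // value captured in the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// In the run above the delta was ~79ms, which the log reports as within tolerance.
		fmt.Printf("guest clock delta: %v\n", delta)
	}
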
	I0924 19:46:53.519158   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.519439   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:53.522059   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522454   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.522483   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.522639   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523144   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523344   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .DriverName
	I0924 19:46:53.523432   70152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:46:53.523470   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.523577   70152 ssh_runner.go:195] Run: cat /version.json
	I0924 19:46:53.523614   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHHostname
	I0924 19:46:53.526336   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526804   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.526845   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.526874   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527024   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527216   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527354   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:53.527358   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.527382   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:53.527484   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.527599   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHPort
	I0924 19:46:53.527742   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHKeyPath
	I0924 19:46:53.527925   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetSSHUsername
	I0924 19:46:53.528073   70152 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/old-k8s-version-510301/id_rsa Username:docker}
	I0924 19:46:53.625956   70152 ssh_runner.go:195] Run: systemctl --version
	I0924 19:46:53.631927   70152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:46:53.769800   70152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:46:53.776028   70152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:46:53.776076   70152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:46:53.792442   70152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:46:53.792476   70152 start.go:495] detecting cgroup driver to use...
	I0924 19:46:53.792558   70152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:46:53.813239   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:46:53.827951   70152 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:46:53.828011   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:46:53.840962   70152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:46:53.853498   70152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:46:53.957380   70152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:46:54.123019   70152 docker.go:233] disabling docker service ...
	I0924 19:46:54.123087   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:46:54.138033   70152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:46:54.153414   70152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:46:54.286761   70152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:46:54.411013   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:46:54.432184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:46:54.449924   70152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 19:46:54.450001   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.459689   70152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:46:54.459745   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.469555   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.480875   70152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:46:54.490860   70152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:46:54.503933   70152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:46:54.513383   70152 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:46:54.513444   70152 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:46:54.527180   70152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:46:54.539778   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:46:54.676320   70152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:46:54.774914   70152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:46:54.775027   70152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:46:54.780383   70152 start.go:563] Will wait 60s for crictl version
	I0924 19:46:54.780457   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:54.785066   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:46:54.825711   70152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
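
	The two "Will wait 60s ..." lines above are plain poll-until-ready loops: stat the CRI-O socket until it exists, then retry `crictl version` until the runtime answers. A minimal sketch of that pattern, with an illustrative helper name and poll interval (not minikube's actual start.go code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio.sock is present; safe to run `crictl version`")
	}
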
	I0924 19:46:54.825792   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.861643   70152 ssh_runner.go:195] Run: crio --version
	I0924 19:46:54.905425   70152 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 19:46:53.542904   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Start
	I0924 19:46:53.543092   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring networks are active...
	I0924 19:46:53.543799   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network default is active
	I0924 19:46:53.544155   69408 main.go:141] libmachine: (embed-certs-311319) Ensuring network mk-embed-certs-311319 is active
	I0924 19:46:53.544586   69408 main.go:141] libmachine: (embed-certs-311319) Getting domain xml...
	I0924 19:46:53.545860   69408 main.go:141] libmachine: (embed-certs-311319) Creating domain...
	I0924 19:46:54.960285   69408 main.go:141] libmachine: (embed-certs-311319) Waiting to get IP...
	I0924 19:46:54.961237   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:54.961738   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:54.961831   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:54.961724   71297 retry.go:31] will retry after 193.067485ms: waiting for machine to come up
	I0924 19:46:55.156270   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.156850   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.156881   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.156806   71297 retry.go:31] will retry after 374.820173ms: waiting for machine to come up
	I0924 19:46:55.533606   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:55.534201   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:55.534235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:55.534160   71297 retry.go:31] will retry after 469.993304ms: waiting for machine to come up
	I0924 19:46:56.005971   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.006513   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.006544   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.006471   71297 retry.go:31] will retry after 418.910837ms: waiting for machine to come up
	I0924 19:46:54.906585   70152 main.go:141] libmachine: (old-k8s-version-510301) Calling .GetIP
	I0924 19:46:54.909353   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909736   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:11:f0", ip: ""} in network mk-old-k8s-version-510301: {Iface:virbr2 ExpiryTime:2024-09-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:72:11:f0 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-510301 Clientid:01:52:54:00:72:11:f0}
	I0924 19:46:54.909766   70152 main.go:141] libmachine: (old-k8s-version-510301) DBG | domain old-k8s-version-510301 has defined IP address 192.168.72.81 and MAC address 52:54:00:72:11:f0 in network mk-old-k8s-version-510301
	I0924 19:46:54.909970   70152 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 19:46:54.915290   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:46:54.927316   70152 kubeadm.go:883] updating cluster {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:46:54.927427   70152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 19:46:54.927465   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:54.971020   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:54.971090   70152 ssh_runner.go:195] Run: which lz4
	I0924 19:46:54.975775   70152 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:46:54.979807   70152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:46:54.979865   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 19:46:56.372682   70152 crio.go:462] duration metric: took 1.396951861s to copy over tarball
	I0924 19:46:56.372750   70152 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:46:53.453495   69576 pod_ready.go:103] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:53.954341   69576 pod_ready.go:93] pod "etcd-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.954366   69576 pod_ready.go:82] duration metric: took 5.007252183s for pod "etcd-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.954375   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959461   69576 pod_ready.go:93] pod "kube-apiserver-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.959485   69576 pod_ready.go:82] duration metric: took 5.103045ms for pod "kube-apiserver-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.959498   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964289   69576 pod_ready.go:93] pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.964316   69576 pod_ready.go:82] duration metric: took 4.809404ms for pod "kube-controller-manager-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.964329   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968263   69576 pod_ready.go:93] pod "kube-proxy-ng8vf" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.968286   69576 pod_ready.go:82] duration metric: took 3.947497ms for pod "kube-proxy-ng8vf" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.968296   69576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971899   69576 pod_ready.go:93] pod "kube-scheduler-no-preload-965745" in "kube-system" namespace has status "Ready":"True"
	I0924 19:46:53.971916   69576 pod_ready.go:82] duration metric: took 3.613023ms for pod "kube-scheduler-no-preload-965745" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:53.971924   69576 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	I0924 19:46:55.980226   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
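
	The pod_ready.go lines interleaved here poll the Kubernetes API until each control-plane pod reports the Ready condition (or the 6m budget runs out). A minimal client-go sketch of that kind of check is below; it is not the test helper itself, and the kubeconfig path is a hypothetical placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod carries a Ready=True condition.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same budget the test uses
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-965745", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
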
	I0924 19:46:54.728787   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:57.226216   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:59.227939   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:46:56.427214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:56.427600   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:56.427638   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:56.427551   71297 retry.go:31] will retry after 631.22309ms: waiting for machine to come up
	I0924 19:46:57.059888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.060269   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.060299   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.060219   71297 retry.go:31] will retry after 833.784855ms: waiting for machine to come up
	I0924 19:46:57.895228   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:57.895693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:57.895711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:57.895641   71297 retry.go:31] will retry after 1.12615573s: waiting for machine to come up
	I0924 19:46:59.023342   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:46:59.023824   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:46:59.023853   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:46:59.023770   71297 retry.go:31] will retry after 1.020351559s: waiting for machine to come up
	I0924 19:47:00.045373   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:00.045833   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:00.045860   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:00.045779   71297 retry.go:31] will retry after 1.127245815s: waiting for machine to come up
	I0924 19:46:59.298055   70152 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925272101s)
	I0924 19:46:59.298082   70152 crio.go:469] duration metric: took 2.925375511s to extract the tarball
	I0924 19:46:59.298091   70152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:46:59.340896   70152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:46:59.374335   70152 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 19:46:59.374358   70152 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 19:46:59.374431   70152 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.374463   70152 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.374468   70152 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.374489   70152 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.374514   70152 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.374434   70152 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.374582   70152 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.374624   70152 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 19:46:59.375796   70152 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.375857   70152 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.375925   70152 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.375869   70152 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.376062   70152 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.376154   70152 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:46:59.376357   70152 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.376419   70152 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 19:46:59.521289   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.525037   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.526549   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.536791   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.545312   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.553847   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 19:46:59.558387   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.611119   70152 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 19:46:59.611167   70152 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.611219   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.659190   70152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 19:46:59.659234   70152 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.659282   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660489   70152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 19:46:59.660522   70152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 19:46:59.660529   70152 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.660558   70152 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.660591   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.660596   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.686686   70152 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 19:46:59.686728   70152 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.686777   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698274   70152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 19:46:59.698313   70152 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 19:46:59.698366   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698379   70152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 19:46:59.698410   70152 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.698449   70152 ssh_runner.go:195] Run: which crictl
	I0924 19:46:59.698451   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.698462   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.698523   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.698527   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.698573   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795169   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.795179   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.795201   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.805639   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.817474   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:46:59.817485   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.817538   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.917772   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 19:46:59.921025   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 19:46:59.929651   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 19:46:59.955330   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 19:46:59.955344   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:46:59.969966   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.058059   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 19:47:00.058134   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 19:47:00.058178   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 19:47:00.078489   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 19:47:00.078543   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 19:47:00.091137   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 19:47:00.091212   70152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 19:47:00.132385   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 19:47:00.140154   70152 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 19:47:00.328511   70152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:47:00.468550   70152 cache_images.go:92] duration metric: took 1.094174976s to LoadCachedImages
	W0924 19:47:00.468674   70152 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19700-3751/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0924 19:47:00.468693   70152 kubeadm.go:934] updating node { 192.168.72.81 8443 v1.20.0 crio true true} ...
	I0924 19:47:00.468831   70152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510301 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:00.468918   70152 ssh_runner.go:195] Run: crio config
	I0924 19:47:00.521799   70152 cni.go:84] Creating CNI manager for ""
	I0924 19:47:00.521826   70152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:00.521836   70152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:00.521858   70152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510301 NodeName:old-k8s-version-510301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 19:47:00.521992   70152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510301"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
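The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and only promoted to kubeadm.yaml when a diff shows changes. A minimal manual check from a shell on the node, using the paths from this log:
	$ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # empty output: nothing to reconfigure
	$ sudo cat /var/tmp/minikube/kubeadm.yaml    # the config the "kubeadm init phase" calls later in this log run against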
	I0924 19:47:00.522051   70152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 19:47:00.534799   70152 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:00.534888   70152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:00.546863   70152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0924 19:47:00.565623   70152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:00.583242   70152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0924 19:47:00.600113   70152 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:00.603653   70152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:00.618699   70152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:00.746348   70152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:00.767201   70152 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301 for IP: 192.168.72.81
	I0924 19:47:00.767228   70152 certs.go:194] generating shared ca certs ...
	I0924 19:47:00.767246   70152 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:00.767418   70152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:00.767468   70152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:00.767482   70152 certs.go:256] generating profile certs ...
	I0924 19:47:00.767607   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/client.key
	I0924 19:47:00.767675   70152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key.32de9897
	I0924 19:47:00.767726   70152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key
	I0924 19:47:00.767866   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:00.767903   70152 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:00.767916   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:00.767950   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:00.767980   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:00.768013   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:00.768064   70152 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:00.768651   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:00.819295   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:00.858368   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:00.903694   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:00.930441   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 19:47:00.960346   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 19:47:00.988938   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:01.014165   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/old-k8s-version-510301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 19:47:01.038384   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:01.061430   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:01.083761   70152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:01.105996   70152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:01.121529   70152 ssh_runner.go:195] Run: openssl version
	I0924 19:47:01.127294   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:01.139547   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143897   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.143956   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:01.149555   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:01.159823   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:01.170730   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175500   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.175635   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:01.181445   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:01.194810   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:01.205193   70152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209256   70152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.209316   70152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:01.214946   70152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
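The openssl/ln pairs above implement OpenSSL's hashed CA directory layout: each CA in /etc/ssl/certs needs a symlink named <subject-hash>.0 for lookups to succeed. Repeating the minikubeCA step by hand, with the paths and hash taken from this log:
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # hash-named link that OpenSSL resolves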
	I0924 19:47:01.225368   70152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:01.229833   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:01.235652   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:01.241158   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:01.248213   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:01.255001   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:01.262990   70152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
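Each of the -checkend 86400 runs asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid past that window, so minikube keeps it instead of regenerating. The same check, spelled out for one of the files above:
	$ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid for 24h+" \
	    || echo "expires within 24h"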
	I0924 19:47:01.270069   70152 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:01.270166   70152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:01.270211   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.310648   70152 cri.go:89] found id: ""
	I0924 19:47:01.310759   70152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:01.321111   70152 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:01.321133   70152 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:01.321182   70152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:01.330754   70152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:01.331880   70152 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510301" does not appear in /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:47:01.332435   70152 kubeconfig.go:62] /home/jenkins/minikube-integration/19700-3751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510301" cluster setting kubeconfig missing "old-k8s-version-510301" context setting]
	I0924 19:47:01.333336   70152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:01.390049   70152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:01.402246   70152 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.81
	I0924 19:47:01.402281   70152 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:01.402295   70152 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:01.402346   70152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:01.443778   70152 cri.go:89] found id: ""
	I0924 19:47:01.443851   70152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:01.459836   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:01.469392   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:01.469414   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:01.469454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:01.480329   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:01.480402   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:01.489799   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:46:58.478282   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:00.478523   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.478757   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:01.400039   69904 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:02.984025   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:02.984060   69904 pod_ready.go:82] duration metric: took 12.763830222s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:02.984074   69904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:01.175244   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:01.175766   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:01.175794   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:01.175728   71297 retry.go:31] will retry after 2.109444702s: waiting for machine to come up
	I0924 19:47:03.288172   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:03.288747   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:03.288815   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:03.288726   71297 retry.go:31] will retry after 1.856538316s: waiting for machine to come up
	I0924 19:47:05.147261   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:05.147676   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:05.147705   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:05.147631   71297 retry.go:31] will retry after 3.46026185s: waiting for machine to come up
	I0924 19:47:01.499967   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:01.500023   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:01.508842   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.517564   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:01.517620   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:01.527204   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:01.536656   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:01.536718   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:01.546282   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:01.555548   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:01.755130   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.379331   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.601177   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:02.739476   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
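Because existing configuration files were found, minikube replays individual kubeadm init phases against the generated config instead of running a full init. A condensed sketch of the same sequence from a shell on the node, with the paths and version taken from this log:
	$ for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	          kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done
	# the control-plane/etcd phases rewrite the static pod manifests under /etc/kubernetes/manifests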
	I0924 19:47:02.829258   70152 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:02.829347   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.330254   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:03.830452   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.329738   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:04.829469   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.329754   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:05.830117   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:06.329834   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
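The burst of pgrep runs is minikube polling, roughly every 500ms per the timestamps, until a kube-apiserver process started from the minikube binaries shows up. The wait reduces to a loop like this sketch:
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5   # matches the ~500ms spacing between the attempts above
	done
	echo "kube-apiserver process is up"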
	I0924 19:47:04.978616   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.478201   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:04.990988   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:07.489888   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:08.610127   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:08.610582   69408 main.go:141] libmachine: (embed-certs-311319) DBG | unable to find current IP address of domain embed-certs-311319 in network mk-embed-certs-311319
	I0924 19:47:08.610609   69408 main.go:141] libmachine: (embed-certs-311319) DBG | I0924 19:47:08.610530   71297 retry.go:31] will retry after 3.91954304s: waiting for machine to come up
	I0924 19:47:06.830043   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.330209   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:07.830432   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.329603   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:08.829525   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.330455   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.830130   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.329475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:10.829474   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:11.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:09.977113   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.977305   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:09.490038   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:11.490626   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:13.990603   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:12.534647   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535213   69408 main.go:141] libmachine: (embed-certs-311319) Found IP for machine: 192.168.61.21
	I0924 19:47:12.535249   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has current primary IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.535259   69408 main.go:141] libmachine: (embed-certs-311319) Reserving static IP address...
	I0924 19:47:12.535700   69408 main.go:141] libmachine: (embed-certs-311319) Reserved static IP address: 192.168.61.21
	I0924 19:47:12.535744   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.535759   69408 main.go:141] libmachine: (embed-certs-311319) Waiting for SSH to be available...
	I0924 19:47:12.535820   69408 main.go:141] libmachine: (embed-certs-311319) DBG | skip adding static IP to network mk-embed-certs-311319 - found existing host DHCP lease matching {name: "embed-certs-311319", mac: "52:54:00:2d:97:73", ip: "192.168.61.21"}
	I0924 19:47:12.535851   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Getting to WaitForSSH function...
	I0924 19:47:12.538011   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538313   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.538336   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.538473   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH client type: external
	I0924 19:47:12.538500   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Using SSH private key: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa (-rw-------)
	I0924 19:47:12.538538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 19:47:12.538558   69408 main.go:141] libmachine: (embed-certs-311319) DBG | About to run SSH command:
	I0924 19:47:12.538634   69408 main.go:141] libmachine: (embed-certs-311319) DBG | exit 0
	I0924 19:47:12.662787   69408 main.go:141] libmachine: (embed-certs-311319) DBG | SSH cmd err, output: <nil>: 
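WaitForSSH shells out to the system ssh client with the option list dumped above and treats a clean `exit 0` as success; host-key checking is disabled, presumably because the VM's host key changes across recreates. Reassembled into a single command, roughly:
	$ ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	      -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -p 22 \
	      -i /home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa \
	      docker@192.168.61.21 'exit 0'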
	I0924 19:47:12.663130   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetConfigRaw
	I0924 19:47:12.663829   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.666266   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666707   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.666734   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.666985   69408 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/config.json ...
	I0924 19:47:12.667187   69408 machine.go:93] provisionDockerMachine start ...
	I0924 19:47:12.667205   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:12.667397   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.669695   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.670056   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.670152   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.670297   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670460   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.670624   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.670793   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.671018   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.671033   69408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 19:47:12.766763   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 19:47:12.766797   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767074   69408 buildroot.go:166] provisioning hostname "embed-certs-311319"
	I0924 19:47:12.767103   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.767285   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.770003   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770519   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.770538   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.770705   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.770934   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771119   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.771237   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.771408   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.771554   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.771565   69408 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-311319 && echo "embed-certs-311319" | sudo tee /etc/hostname
	I0924 19:47:12.879608   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-311319
	
	I0924 19:47:12.879636   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.882136   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882424   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.882467   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.882663   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:12.882866   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883075   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:12.883235   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:12.883416   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:12.883583   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:12.883599   69408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-311319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-311319/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-311319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 19:47:12.987554   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 19:47:12.987586   69408 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19700-3751/.minikube CaCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19700-3751/.minikube}
	I0924 19:47:12.987608   69408 buildroot.go:174] setting up certificates
	I0924 19:47:12.987618   69408 provision.go:84] configureAuth start
	I0924 19:47:12.987630   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetMachineName
	I0924 19:47:12.987918   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:12.990946   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.991399   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.991554   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:12.993829   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994193   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:12.994222   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:12.994349   69408 provision.go:143] copyHostCerts
	I0924 19:47:12.994410   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem, removing ...
	I0924 19:47:12.994420   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem
	I0924 19:47:12.994478   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/ca.pem (1078 bytes)
	I0924 19:47:12.994576   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem, removing ...
	I0924 19:47:12.994586   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem
	I0924 19:47:12.994609   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/cert.pem (1123 bytes)
	I0924 19:47:12.994663   69408 exec_runner.go:144] found /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem, removing ...
	I0924 19:47:12.994670   69408 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem
	I0924 19:47:12.994689   69408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19700-3751/.minikube/key.pem (1675 bytes)
	I0924 19:47:12.994734   69408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem org=jenkins.embed-certs-311319 san=[127.0.0.1 192.168.61.21 embed-certs-311319 localhost minikube]
	I0924 19:47:13.255351   69408 provision.go:177] copyRemoteCerts
	I0924 19:47:13.255425   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 19:47:13.255452   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.257888   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258200   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.258229   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.258359   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.258567   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.258746   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.258895   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.337835   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 19:47:13.360866   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 19:47:13.382703   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 19:47:13.404887   69408 provision.go:87] duration metric: took 417.256101ms to configureAuth
	I0924 19:47:13.404918   69408 buildroot.go:189] setting minikube options for container-runtime
	I0924 19:47:13.405088   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:47:13.405156   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.407711   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408005   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.408024   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.408215   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.408408   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408558   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.408660   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.408798   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.408960   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.408975   69408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 19:47:13.623776   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 19:47:13.623798   69408 machine.go:96] duration metric: took 956.599003ms to provisionDockerMachine
	I0924 19:47:13.623809   69408 start.go:293] postStartSetup for "embed-certs-311319" (driver="kvm2")
	I0924 19:47:13.623818   69408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 19:47:13.623833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.624139   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 19:47:13.624168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.627101   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627443   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.627463   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.627613   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.627790   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.627941   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.628087   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.705595   69408 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 19:47:13.709401   69408 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 19:47:13.709432   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/addons for local assets ...
	I0924 19:47:13.709507   69408 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3751/.minikube/files for local assets ...
	I0924 19:47:13.709597   69408 filesync.go:149] local asset: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem -> 109492.pem in /etc/ssl/certs
	I0924 19:47:13.709717   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 19:47:13.718508   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:13.741537   69408 start.go:296] duration metric: took 117.71568ms for postStartSetup
	I0924 19:47:13.741586   69408 fix.go:56] duration metric: took 20.222309525s for fixHost
	I0924 19:47:13.741609   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.743935   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744298   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.744319   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.744478   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.744665   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744833   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.744950   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.745099   69408 main.go:141] libmachine: Using SSH client type: native
	I0924 19:47:13.745299   69408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0924 19:47:13.745310   69408 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 19:47:13.847189   69408 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727207233.821269327
	
	I0924 19:47:13.847206   69408 fix.go:216] guest clock: 1727207233.821269327
	I0924 19:47:13.847213   69408 fix.go:229] Guest: 2024-09-24 19:47:13.821269327 +0000 UTC Remote: 2024-09-24 19:47:13.741591139 +0000 UTC m=+352.627485562 (delta=79.678188ms)
	I0924 19:47:13.847230   69408 fix.go:200] guest clock delta is within tolerance: 79.678188ms
	I0924 19:47:13.847236   69408 start.go:83] releasing machines lock for "embed-certs-311319", held for 20.328002727s
	I0924 19:47:13.847252   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.847550   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:13.850207   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850597   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.850624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.850777   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851225   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851382   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:47:13.851459   69408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 19:47:13.851520   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.851583   69408 ssh_runner.go:195] Run: cat /version.json
	I0924 19:47:13.851606   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:47:13.854077   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854214   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854354   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854378   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854508   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.854615   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:13.854646   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:13.854666   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.854852   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.854855   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:47:13.855020   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.855030   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:47:13.855168   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:47:13.855279   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:47:13.927108   69408 ssh_runner.go:195] Run: systemctl --version
	I0924 19:47:13.948600   69408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 19:47:14.091427   69408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 19:47:14.097911   69408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 19:47:14.097970   69408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 19:47:14.113345   69408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 19:47:14.113367   69408 start.go:495] detecting cgroup driver to use...
	I0924 19:47:14.113418   69408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 19:47:14.129953   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 19:47:14.143732   69408 docker.go:217] disabling cri-docker service (if available) ...
	I0924 19:47:14.143792   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 19:47:14.156986   69408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 19:47:14.170235   69408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 19:47:14.280973   69408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 19:47:14.431584   69408 docker.go:233] disabling docker service ...
	I0924 19:47:14.431652   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 19:47:14.447042   69408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 19:47:14.458811   69408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 19:47:14.571325   69408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 19:47:14.685951   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 19:47:14.698947   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 19:47:14.716153   69408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 19:47:14.716210   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.725659   69408 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 19:47:14.725711   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.734814   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.744087   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.753666   69408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 19:47:14.763166   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.772502   69408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.787890   69408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 19:47:14.797483   69408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 19:47:14.805769   69408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 19:47:14.805822   69408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 19:47:14.817290   69408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 19:47:14.827023   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:14.954141   69408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 19:47:15.033256   69408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 19:47:15.033336   69408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 19:47:15.038070   69408 start.go:563] Will wait 60s for crictl version
	I0924 19:47:15.038118   69408 ssh_runner.go:195] Run: which crictl
	I0924 19:47:15.041588   69408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 19:47:15.081812   69408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 19:47:15.081922   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.108570   69408 ssh_runner.go:195] Run: crio --version
	I0924 19:47:15.137432   69408 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 19:47:15.138786   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetIP
	I0924 19:47:15.141328   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141693   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:47:15.141723   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:47:15.141867   69408 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 19:47:15.145512   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:15.156995   69408 kubeadm.go:883] updating cluster {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 19:47:15.157095   69408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 19:47:15.157142   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:15.189861   69408 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 19:47:15.189919   69408 ssh_runner.go:195] Run: which lz4
	I0924 19:47:15.193364   69408 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 19:47:15.196961   69408 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 19:47:15.196986   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 19:47:11.830448   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.330373   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:12.830050   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.329571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.829489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.329728   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:14.829674   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.329673   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:15.829570   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.330102   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:13.978164   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.978363   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:15.990970   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:18.491272   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:16.371583   69408 crio.go:462] duration metric: took 1.178253814s to copy over tarball
	I0924 19:47:16.371663   69408 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 19:47:18.358246   69408 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.986557839s)
	I0924 19:47:18.358276   69408 crio.go:469] duration metric: took 1.986666343s to extract the tarball
	I0924 19:47:18.358285   69408 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 19:47:18.393855   69408 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 19:47:18.442985   69408 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 19:47:18.443011   69408 cache_images.go:84] Images are preloaded, skipping loading
	I0924 19:47:18.443020   69408 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.1 crio true true} ...
	I0924 19:47:18.443144   69408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-311319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 19:47:18.443225   69408 ssh_runner.go:195] Run: crio config
	I0924 19:47:18.495010   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:18.495034   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:18.495045   69408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 19:47:18.495071   69408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-311319 NodeName:embed-certs-311319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 19:47:18.495201   69408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-311319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 19:47:18.495259   69408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 19:47:18.504758   69408 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 19:47:18.504837   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 19:47:18.513817   69408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 19:47:18.529890   69408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 19:47:18.545915   69408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0924 19:47:18.561627   69408 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0924 19:47:18.565041   69408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 19:47:18.576059   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:47:18.686482   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:47:18.703044   69408 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319 for IP: 192.168.61.21
	I0924 19:47:18.703074   69408 certs.go:194] generating shared ca certs ...
	I0924 19:47:18.703095   69408 certs.go:226] acquiring lock for ca certs: {Name:mkf91a44a1711af5d6a0cffa7b91f5cc433daa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:47:18.703278   69408 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key
	I0924 19:47:18.703317   69408 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key
	I0924 19:47:18.703327   69408 certs.go:256] generating profile certs ...
	I0924 19:47:18.703417   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/client.key
	I0924 19:47:18.703477   69408 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key.8f14491f
	I0924 19:47:18.703510   69408 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key
	I0924 19:47:18.703649   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem (1338 bytes)
	W0924 19:47:18.703703   69408 certs.go:480] ignoring /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949_empty.pem, impossibly tiny 0 bytes
	I0924 19:47:18.703715   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca-key.pem (1679 bytes)
	I0924 19:47:18.703740   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/ca.pem (1078 bytes)
	I0924 19:47:18.703771   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/cert.pem (1123 bytes)
	I0924 19:47:18.703803   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/certs/key.pem (1675 bytes)
	I0924 19:47:18.703843   69408 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem (1708 bytes)
	I0924 19:47:18.704668   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 19:47:18.731187   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 19:47:18.762416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 19:47:18.793841   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 19:47:18.822091   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 19:47:18.854506   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 19:47:18.880416   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 19:47:18.903863   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/embed-certs-311319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 19:47:18.926078   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/certs/10949.pem --> /usr/share/ca-certificates/10949.pem (1338 bytes)
	I0924 19:47:18.947455   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/ssl/certs/109492.pem --> /usr/share/ca-certificates/109492.pem (1708 bytes)
	I0924 19:47:18.968237   69408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19700-3751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 19:47:18.990346   69408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 19:47:19.006286   69408 ssh_runner.go:195] Run: openssl version
	I0924 19:47:19.011968   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10949.pem && ln -fs /usr/share/ca-certificates/10949.pem /etc/ssl/certs/10949.pem"
	I0924 19:47:19.021631   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025859   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:37 /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.025914   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10949.pem
	I0924 19:47:19.030999   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10949.pem /etc/ssl/certs/51391683.0"
	I0924 19:47:19.041265   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109492.pem && ln -fs /usr/share/ca-certificates/109492.pem /etc/ssl/certs/109492.pem"
	I0924 19:47:19.050994   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054763   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:37 /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.054810   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109492.pem
	I0924 19:47:19.059873   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109492.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 19:47:19.069694   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 19:47:19.079194   69408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083185   69408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.083236   69408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 19:47:19.088369   69408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 19:47:19.098719   69408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 19:47:19.102935   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 19:47:19.108364   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 19:47:19.113724   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 19:47:19.119556   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 19:47:19.125014   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 19:47:19.130466   69408 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 19:47:19.135718   69408 kubeadm.go:392] StartCluster: {Name:embed-certs-311319 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-311319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 19:47:19.135786   69408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 19:47:19.135826   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.171585   69408 cri.go:89] found id: ""
	I0924 19:47:19.171664   69408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 19:47:19.181296   69408 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 19:47:19.181315   69408 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 19:47:19.181363   69408 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 19:47:19.191113   69408 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:47:19.192148   69408 kubeconfig.go:125] found "embed-certs-311319" server: "https://192.168.61.21:8443"
	I0924 19:47:19.194115   69408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 19:47:19.203274   69408 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.21
	I0924 19:47:19.203308   69408 kubeadm.go:1160] stopping kube-system containers ...
	I0924 19:47:19.203319   69408 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 19:47:19.203372   69408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 19:47:19.249594   69408 cri.go:89] found id: ""
	I0924 19:47:19.249678   69408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 19:47:19.268296   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:47:19.277151   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:47:19.277169   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:47:19.277206   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:47:19.285488   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:47:19.285550   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:47:19.294995   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:47:19.303613   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:47:19.303669   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:47:19.312919   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.321717   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:47:19.321778   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:47:19.330321   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:47:19.342441   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:47:19.342497   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:47:19.352505   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:47:19.361457   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:19.463310   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.242073   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.431443   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.500079   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:20.575802   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:47:20.575904   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.076353   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:16.829867   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.830132   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.329512   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:18.829524   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.329716   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:19.829496   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.329702   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:20.830155   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:21.330292   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:17.979442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.478202   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.478336   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:20.491568   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:22.991057   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:21.576940   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.076696   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.576235   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.594920   69408 api_server.go:72] duration metric: took 2.019101558s to wait for apiserver process to appear ...
	I0924 19:47:22.594944   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:47:22.594965   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:22.595379   69408 api_server.go:269] stopped: https://192.168.61.21:8443/healthz: Get "https://192.168.61.21:8443/healthz": dial tcp 192.168.61.21:8443: connect: connection refused
	I0924 19:47:23.095005   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.467947   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.467974   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.467988   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.515819   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 19:47:25.515851   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 19:47:25.596001   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:25.602276   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:25.602314   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:26.095918   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.100666   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.100698   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:21.829987   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.329630   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:22.830041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:23.829696   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.329494   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:24.830212   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.330402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:25.829827   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.329541   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:26.595784   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:26.601821   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 19:47:26.601861   69408 api_server.go:103] status: https://192.168.61.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 19:47:27.095137   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:47:27.099164   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:47:27.106625   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:47:27.106652   69408 api_server.go:131] duration metric: took 4.511701512s to wait for apiserver health ...
	I0924 19:47:27.106661   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:47:27.106668   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:47:27.108430   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:47:24.479088   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.978509   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:25.490325   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.990308   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:27.109830   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:47:27.119442   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:47:27.139119   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:47:27.150029   69408 system_pods.go:59] 8 kube-system pods found
	I0924 19:47:27.150060   69408 system_pods.go:61] "coredns-7c65d6cfc9-wwzps" [5d53dda1-bd41-40f4-8e01-e3808a6e17e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 19:47:27.150067   69408 system_pods.go:61] "etcd-embed-certs-311319" [899d3105-b565-4c9c-8b8e-fa524ba8bee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 19:47:27.150076   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [45909a95-dafd-436a-b1c9-4b16a7cb6ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 19:47:27.150083   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [e122c12d-8ad6-472d-9339-a9751a6108a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 19:47:27.150089   69408 system_pods.go:61] "kube-proxy-qk749" [ae8c6989-5de4-41bd-9098-1924532b7ff8] Running
	I0924 19:47:27.150094   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [2f7427ff-479c-4f36-b27f-cfbf76e26201] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 19:47:27.150103   69408 system_pods.go:61] "metrics-server-6867b74b74-jfrhm" [b0e8ee4e-c2c6-4379-85ca-805cd3ce6371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:47:27.150107   69408 system_pods.go:61] "storage-provisioner" [b61b6e53-23ad-4cee-8eaa-8195dc6e67b8] Running
	I0924 19:47:27.150115   69408 system_pods.go:74] duration metric: took 10.980516ms to wait for pod list to return data ...
	I0924 19:47:27.150123   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:47:27.154040   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:47:27.154061   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:47:27.154070   69408 node_conditions.go:105] duration metric: took 3.94208ms to run NodePressure ...
	I0924 19:47:27.154083   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 19:47:27.413841   69408 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419186   69408 kubeadm.go:739] kubelet initialised
	I0924 19:47:27.419208   69408 kubeadm.go:740] duration metric: took 5.345194ms waiting for restarted kubelet to initialise ...
	I0924 19:47:27.419217   69408 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:47:27.424725   69408 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.429510   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429529   69408 pod_ready.go:82] duration metric: took 4.780829ms for pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.429537   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "coredns-7c65d6cfc9-wwzps" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.429542   69408 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.434176   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434200   69408 pod_ready.go:82] duration metric: took 4.647781ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.434211   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "etcd-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.434218   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.438323   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438352   69408 pod_ready.go:82] duration metric: took 4.121619ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.438365   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.438377   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.543006   69408 pod_ready.go:98] node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543032   69408 pod_ready.go:82] duration metric: took 104.641326ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	E0924 19:47:27.543046   69408 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-311319" hosting pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-311319" has status "Ready":"False"
	I0924 19:47:27.543053   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942331   69408 pod_ready.go:93] pod "kube-proxy-qk749" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:27.942351   69408 pod_ready.go:82] duration metric: took 399.288777ms for pod "kube-proxy-qk749" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:27.942360   69408 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:29.955819   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:26.830122   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.329632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:27.829858   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.329762   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:28.829476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.330221   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.829642   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.329491   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:30.830098   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:31.329499   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:29.479174   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:31.979161   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:30.490043   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.490237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:32.447718   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.948011   69408 pod_ready.go:103] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:35.948500   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:47:35.948525   69408 pod_ready.go:82] duration metric: took 8.006158098s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:35.948534   69408 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	I0924 19:47:31.830201   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.330017   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:32.829654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.329718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:33.830007   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.329683   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.829441   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:35.829899   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:36.330437   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:34.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.979370   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:34.490525   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.493495   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:38.990185   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:37.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:39.958725   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:36.830372   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.330124   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:37.829745   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.329476   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:38.830138   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.329657   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.829850   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.330083   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:40.829903   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:41.329650   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:39.478317   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.978220   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:40.990288   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.990812   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:42.455130   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:44.954001   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:41.829413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.329658   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:42.829718   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.330413   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:43.830374   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.329633   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.829479   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.330059   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:45.829818   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:46.330216   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:44.478335   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.977745   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:45.489604   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:47.490196   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.954193   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:48.955025   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:46.830337   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.330269   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:47.829573   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.329440   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:48.829923   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.329742   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.829771   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.329793   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:50.829379   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:51.329385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:49.477310   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.977800   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:49.990388   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:52.490087   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.453967   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:53.454464   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:55.454863   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:51.829989   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.329456   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:52.830395   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.330348   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:53.829385   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.329667   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.830290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.330430   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:55.829909   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:56.330041   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:54.477481   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.978407   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:54.490209   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.989867   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:58.990813   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:57.954303   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:00.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:47:56.829842   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.329904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:57.829402   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.329848   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:58.830403   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.330062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.829904   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.329651   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:00.829451   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:01.330427   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:47:59.479270   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.978099   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.490292   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:03.490598   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:02.955021   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.455302   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:01.830104   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.330085   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:02.830241   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:02.830313   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:02.863389   70152 cri.go:89] found id: ""
	I0924 19:48:02.863421   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.863432   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:02.863440   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:02.863501   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:02.903587   70152 cri.go:89] found id: ""
	I0924 19:48:02.903615   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.903627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:02.903634   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:02.903691   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:02.936090   70152 cri.go:89] found id: ""
	I0924 19:48:02.936117   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.936132   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:02.936138   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:02.936197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:02.970010   70152 cri.go:89] found id: ""
	I0924 19:48:02.970034   70152 logs.go:276] 0 containers: []
	W0924 19:48:02.970042   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:02.970047   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:02.970094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:03.005123   70152 cri.go:89] found id: ""
	I0924 19:48:03.005146   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.005156   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:03.005164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:03.005224   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:03.037142   70152 cri.go:89] found id: ""
	I0924 19:48:03.037185   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.037214   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:03.037223   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:03.037289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:03.071574   70152 cri.go:89] found id: ""
	I0924 19:48:03.071605   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.071616   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:03.071644   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:03.071710   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:03.101682   70152 cri.go:89] found id: ""
	I0924 19:48:03.101710   70152 logs.go:276] 0 containers: []
	W0924 19:48:03.101718   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:03.101727   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:03.101737   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:03.145955   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:03.145982   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:03.194495   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:03.194531   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:03.207309   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:03.207344   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:03.318709   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:03.318736   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:03.318751   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:05.897472   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:05.910569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:05.910633   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:05.972008   70152 cri.go:89] found id: ""
	I0924 19:48:05.972047   70152 logs.go:276] 0 containers: []
	W0924 19:48:05.972059   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:05.972066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:05.972128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:06.021928   70152 cri.go:89] found id: ""
	I0924 19:48:06.021954   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.021961   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:06.021967   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:06.022018   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:06.054871   70152 cri.go:89] found id: ""
	I0924 19:48:06.054910   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.054919   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:06.054924   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:06.054979   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:06.087218   70152 cri.go:89] found id: ""
	I0924 19:48:06.087242   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.087253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:06.087261   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:06.087312   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:06.120137   70152 cri.go:89] found id: ""
	I0924 19:48:06.120162   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.120170   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:06.120176   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:06.120222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:06.150804   70152 cri.go:89] found id: ""
	I0924 19:48:06.150842   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.150854   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:06.150862   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:06.150911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:06.189829   70152 cri.go:89] found id: ""
	I0924 19:48:06.189856   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.189864   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:06.189870   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:06.189920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:06.224712   70152 cri.go:89] found id: ""
	I0924 19:48:06.224739   70152 logs.go:276] 0 containers: []
	W0924 19:48:06.224747   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:06.224755   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:06.224769   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:06.290644   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:06.290669   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:06.290681   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:06.369393   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:06.369427   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:06.404570   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:06.404601   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:06.456259   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:06.456288   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:04.478140   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:06.478544   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:05.991344   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.489768   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:07.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.453427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:08.969378   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:08.982058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:08.982129   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:09.015453   70152 cri.go:89] found id: ""
	I0924 19:48:09.015475   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.015484   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:09.015489   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:09.015535   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:09.046308   70152 cri.go:89] found id: ""
	I0924 19:48:09.046332   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.046343   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:09.046350   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:09.046412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:09.077263   70152 cri.go:89] found id: ""
	I0924 19:48:09.077296   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.077308   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:09.077315   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:09.077373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:09.109224   70152 cri.go:89] found id: ""
	I0924 19:48:09.109255   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.109267   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:09.109274   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:09.109342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:09.144346   70152 cri.go:89] found id: ""
	I0924 19:48:09.144370   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.144378   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:09.144383   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:09.144434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:09.175798   70152 cri.go:89] found id: ""
	I0924 19:48:09.175827   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.175843   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:09.175854   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:09.175923   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:09.211912   70152 cri.go:89] found id: ""
	I0924 19:48:09.211935   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.211942   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:09.211948   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:09.211996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:09.242068   70152 cri.go:89] found id: ""
	I0924 19:48:09.242099   70152 logs.go:276] 0 containers: []
	W0924 19:48:09.242110   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:09.242121   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:09.242134   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:09.306677   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:09.306696   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:09.306707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:09.384544   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:09.384598   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:09.419555   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:09.419583   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:09.470699   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:09.470731   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:08.977847   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.477629   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:10.491124   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.990300   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:12.455219   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:11.984355   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:11.997823   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:11.997879   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:12.029976   70152 cri.go:89] found id: ""
	I0924 19:48:12.030009   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.030021   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:12.030041   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:12.030187   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:12.061131   70152 cri.go:89] found id: ""
	I0924 19:48:12.061157   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.061165   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:12.061170   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:12.061223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:12.091952   70152 cri.go:89] found id: ""
	I0924 19:48:12.091978   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.091986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:12.091992   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:12.092039   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:12.127561   70152 cri.go:89] found id: ""
	I0924 19:48:12.127586   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.127597   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:12.127604   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:12.127688   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:12.157342   70152 cri.go:89] found id: ""
	I0924 19:48:12.157363   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.157371   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:12.157377   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:12.157449   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:12.188059   70152 cri.go:89] found id: ""
	I0924 19:48:12.188090   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.188101   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:12.188109   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:12.188163   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:12.222357   70152 cri.go:89] found id: ""
	I0924 19:48:12.222380   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.222388   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:12.222398   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:12.222456   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:12.252715   70152 cri.go:89] found id: ""
	I0924 19:48:12.252736   70152 logs.go:276] 0 containers: []
	W0924 19:48:12.252743   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:12.252751   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:12.252761   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:12.302913   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:12.302943   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:12.315812   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:12.315840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:12.392300   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:12.392322   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:12.392333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:12.475042   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:12.475081   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.013852   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:15.026515   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:15.026586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:15.057967   70152 cri.go:89] found id: ""
	I0924 19:48:15.057993   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.058001   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:15.058008   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:15.058063   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:15.092822   70152 cri.go:89] found id: ""
	I0924 19:48:15.092852   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.092860   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:15.092866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:15.092914   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:15.127847   70152 cri.go:89] found id: ""
	I0924 19:48:15.127875   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.127884   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:15.127889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:15.127941   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:15.159941   70152 cri.go:89] found id: ""
	I0924 19:48:15.159967   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.159975   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:15.159981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:15.160035   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:15.192384   70152 cri.go:89] found id: ""
	I0924 19:48:15.192411   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.192422   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:15.192428   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:15.192481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:15.225446   70152 cri.go:89] found id: ""
	I0924 19:48:15.225472   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.225482   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:15.225488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:15.225546   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:15.257292   70152 cri.go:89] found id: ""
	I0924 19:48:15.257312   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.257320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:15.257326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:15.257377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:15.288039   70152 cri.go:89] found id: ""
	I0924 19:48:15.288073   70152 logs.go:276] 0 containers: []
	W0924 19:48:15.288085   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:15.288096   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:15.288110   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:15.300593   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:15.300619   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:15.365453   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:15.365482   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:15.365497   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:15.442405   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:15.442440   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:15.481003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:15.481033   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:13.978638   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.477631   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:14.990464   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.991280   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:16.954405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.955055   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:18.031802   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:18.044013   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:18.044070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:18.076333   70152 cri.go:89] found id: ""
	I0924 19:48:18.076357   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.076365   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:18.076371   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:18.076421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:18.110333   70152 cri.go:89] found id: ""
	I0924 19:48:18.110367   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.110379   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:18.110386   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:18.110457   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:18.142730   70152 cri.go:89] found id: ""
	I0924 19:48:18.142755   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.142763   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:18.142769   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:18.142848   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:18.174527   70152 cri.go:89] found id: ""
	I0924 19:48:18.174551   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.174561   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:18.174568   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:18.174623   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:18.213873   70152 cri.go:89] found id: ""
	I0924 19:48:18.213904   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.213916   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:18.213923   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:18.214019   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:18.247037   70152 cri.go:89] found id: ""
	I0924 19:48:18.247069   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.247079   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:18.247087   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:18.247167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:18.278275   70152 cri.go:89] found id: ""
	I0924 19:48:18.278302   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.278313   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:18.278319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:18.278377   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:18.311651   70152 cri.go:89] found id: ""
	I0924 19:48:18.311679   70152 logs.go:276] 0 containers: []
	W0924 19:48:18.311690   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:18.311702   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:18.311714   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:18.365113   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:18.365144   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:18.378675   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:18.378702   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:18.450306   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:18.450339   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:18.450353   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.529373   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:18.529420   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:21.065169   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:21.077517   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:21.077579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:21.112639   70152 cri.go:89] found id: ""
	I0924 19:48:21.112663   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.112671   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:21.112677   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:21.112729   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:21.144587   70152 cri.go:89] found id: ""
	I0924 19:48:21.144608   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.144616   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:21.144625   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:21.144675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:21.175675   70152 cri.go:89] found id: ""
	I0924 19:48:21.175697   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.175705   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:21.175710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:21.175760   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:21.207022   70152 cri.go:89] found id: ""
	I0924 19:48:21.207044   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.207053   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:21.207058   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:21.207108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:21.238075   70152 cri.go:89] found id: ""
	I0924 19:48:21.238106   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.238118   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:21.238125   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:21.238188   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:21.269998   70152 cri.go:89] found id: ""
	I0924 19:48:21.270030   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.270040   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:21.270048   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:21.270108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:21.301274   70152 cri.go:89] found id: ""
	I0924 19:48:21.301303   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.301315   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:21.301323   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:21.301389   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:21.332082   70152 cri.go:89] found id: ""
	I0924 19:48:21.332107   70152 logs.go:276] 0 containers: []
	W0924 19:48:21.332115   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:21.332123   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:21.332133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:21.383713   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:21.383759   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:21.396926   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:21.396950   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:21.465280   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:21.465306   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:21.465321   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:18.477865   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:20.978484   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:19.491021   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.993922   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.454663   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:23.455041   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.954094   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:21.544724   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:21.544760   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:24.083632   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:24.095853   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:24.095909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:24.126692   70152 cri.go:89] found id: ""
	I0924 19:48:24.126718   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.126732   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:24.126739   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:24.126794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:24.157451   70152 cri.go:89] found id: ""
	I0924 19:48:24.157478   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.157490   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:24.157498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:24.157548   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:24.188313   70152 cri.go:89] found id: ""
	I0924 19:48:24.188340   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.188351   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:24.188359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:24.188406   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:24.218240   70152 cri.go:89] found id: ""
	I0924 19:48:24.218271   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.218283   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:24.218291   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:24.218348   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:24.249281   70152 cri.go:89] found id: ""
	I0924 19:48:24.249313   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.249324   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:24.249331   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:24.249391   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:24.280160   70152 cri.go:89] found id: ""
	I0924 19:48:24.280182   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.280189   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:24.280194   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:24.280246   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:24.310699   70152 cri.go:89] found id: ""
	I0924 19:48:24.310726   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.310735   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:24.310740   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:24.310792   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:24.346673   70152 cri.go:89] found id: ""
	I0924 19:48:24.346703   70152 logs.go:276] 0 containers: []
	W0924 19:48:24.346715   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:24.346725   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:24.346738   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:24.396068   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:24.396100   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:24.408987   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:24.409014   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:24.477766   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:24.477792   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:24.477805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:24.556507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:24.556539   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:23.477283   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:25.477770   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.478124   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:24.491040   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:26.990109   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.954634   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.954918   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:27.099161   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:27.110953   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:27.111027   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:27.143812   70152 cri.go:89] found id: ""
	I0924 19:48:27.143838   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.143846   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:27.143852   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:27.143909   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:27.173741   70152 cri.go:89] found id: ""
	I0924 19:48:27.173766   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.173775   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:27.173780   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:27.173835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:27.203089   70152 cri.go:89] found id: ""
	I0924 19:48:27.203118   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.203128   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:27.203135   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:27.203197   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:27.234206   70152 cri.go:89] found id: ""
	I0924 19:48:27.234232   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.234240   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:27.234247   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:27.234298   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:27.265173   70152 cri.go:89] found id: ""
	I0924 19:48:27.265199   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.265207   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:27.265213   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:27.265274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:27.294683   70152 cri.go:89] found id: ""
	I0924 19:48:27.294711   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.294722   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:27.294737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:27.294800   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:27.327766   70152 cri.go:89] found id: ""
	I0924 19:48:27.327796   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.327804   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:27.327810   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:27.327867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:27.358896   70152 cri.go:89] found id: ""
	I0924 19:48:27.358922   70152 logs.go:276] 0 containers: []
	W0924 19:48:27.358932   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:27.358943   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:27.358958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:27.407245   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:27.407281   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:27.420301   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:27.420333   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:27.483150   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:27.483175   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:27.483190   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:27.558952   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:27.558988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:30.094672   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:30.107997   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:30.108061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:30.141210   70152 cri.go:89] found id: ""
	I0924 19:48:30.141238   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.141248   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:30.141256   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:30.141319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:30.173799   70152 cri.go:89] found id: ""
	I0924 19:48:30.173825   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.173833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:30.173839   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:30.173900   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:30.206653   70152 cri.go:89] found id: ""
	I0924 19:48:30.206676   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.206684   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:30.206690   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:30.206739   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:30.245268   70152 cri.go:89] found id: ""
	I0924 19:48:30.245296   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.245351   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:30.245363   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:30.245424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:30.277515   70152 cri.go:89] found id: ""
	I0924 19:48:30.277550   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.277570   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:30.277578   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:30.277646   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:30.309533   70152 cri.go:89] found id: ""
	I0924 19:48:30.309556   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.309564   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:30.309576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:30.309641   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:30.342113   70152 cri.go:89] found id: ""
	I0924 19:48:30.342133   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.342140   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:30.342146   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:30.342204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:30.377786   70152 cri.go:89] found id: ""
	I0924 19:48:30.377818   70152 logs.go:276] 0 containers: []
	W0924 19:48:30.377827   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:30.377835   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:30.377846   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:30.429612   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:30.429660   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:30.442864   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:30.442892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:30.508899   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:30.508917   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:30.508928   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:30.585285   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:30.585316   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:29.978453   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.478565   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:29.489398   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:31.490231   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.490730   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:32.454775   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:34.455023   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:33.125617   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:33.137771   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:33.137847   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:33.169654   70152 cri.go:89] found id: ""
	I0924 19:48:33.169684   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.169694   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:33.169703   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:33.169769   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:33.205853   70152 cri.go:89] found id: ""
	I0924 19:48:33.205877   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.205884   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:33.205890   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:33.205947   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:33.239008   70152 cri.go:89] found id: ""
	I0924 19:48:33.239037   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.239048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:33.239056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:33.239114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:33.269045   70152 cri.go:89] found id: ""
	I0924 19:48:33.269077   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.269088   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:33.269096   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:33.269158   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:33.298553   70152 cri.go:89] found id: ""
	I0924 19:48:33.298583   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.298594   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:33.298602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:33.298663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:33.329077   70152 cri.go:89] found id: ""
	I0924 19:48:33.329103   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.329114   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:33.329122   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:33.329181   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:33.361366   70152 cri.go:89] found id: ""
	I0924 19:48:33.361397   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.361408   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:33.361416   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:33.361465   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:33.394899   70152 cri.go:89] found id: ""
	I0924 19:48:33.394941   70152 logs.go:276] 0 containers: []
	W0924 19:48:33.394952   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:33.394964   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:33.394978   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:33.446878   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:33.446917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:33.460382   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:33.460408   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:33.530526   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:33.530546   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:33.530563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:33.610520   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:33.610559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.152137   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:36.165157   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:36.165225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:36.196113   70152 cri.go:89] found id: ""
	I0924 19:48:36.196142   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.196151   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:36.196159   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:36.196223   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:36.230743   70152 cri.go:89] found id: ""
	I0924 19:48:36.230770   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.230779   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:36.230786   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:36.230870   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:36.263401   70152 cri.go:89] found id: ""
	I0924 19:48:36.263430   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.263439   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:36.263444   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:36.263492   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:36.298958   70152 cri.go:89] found id: ""
	I0924 19:48:36.298982   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.298991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:36.298996   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:36.299053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:36.337604   70152 cri.go:89] found id: ""
	I0924 19:48:36.337636   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.337647   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:36.337654   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:36.337717   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:36.368707   70152 cri.go:89] found id: ""
	I0924 19:48:36.368738   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.368749   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:36.368763   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:36.368833   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:36.400169   70152 cri.go:89] found id: ""
	I0924 19:48:36.400194   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.400204   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:36.400212   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:36.400277   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:36.430959   70152 cri.go:89] found id: ""
	I0924 19:48:36.430987   70152 logs.go:276] 0 containers: []
	W0924 19:48:36.430994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:36.431003   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:36.431015   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:48:34.478813   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.978477   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:35.991034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:38.489705   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:36.954351   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:39.455405   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	W0924 19:48:36.508356   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:36.508381   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:36.508392   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:36.589376   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:36.589411   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:36.629423   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:36.629453   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:36.679281   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:36.679313   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.193627   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:39.207486   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:39.207564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:39.239864   70152 cri.go:89] found id: ""
	I0924 19:48:39.239888   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.239897   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:39.239902   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:39.239950   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:39.273596   70152 cri.go:89] found id: ""
	I0924 19:48:39.273622   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.273630   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:39.273635   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:39.273685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:39.305659   70152 cri.go:89] found id: ""
	I0924 19:48:39.305685   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.305696   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:39.305703   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:39.305762   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:39.338060   70152 cri.go:89] found id: ""
	I0924 19:48:39.338091   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.338103   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:39.338110   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:39.338167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:39.369652   70152 cri.go:89] found id: ""
	I0924 19:48:39.369680   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.369688   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:39.369694   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:39.369757   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:39.406342   70152 cri.go:89] found id: ""
	I0924 19:48:39.406365   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.406373   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:39.406379   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:39.406428   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:39.437801   70152 cri.go:89] found id: ""
	I0924 19:48:39.437824   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.437832   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:39.437838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:39.437892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:39.476627   70152 cri.go:89] found id: ""
	I0924 19:48:39.476651   70152 logs.go:276] 0 containers: []
	W0924 19:48:39.476662   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:39.476672   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:39.476685   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:39.528302   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:39.528332   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:39.540968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:39.540999   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:39.606690   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:39.606716   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:39.606733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:39.689060   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:39.689101   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:39.478198   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.478531   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:40.489969   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.491022   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:41.954586   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.454898   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:42.225445   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:42.238188   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:42.238262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:42.270077   70152 cri.go:89] found id: ""
	I0924 19:48:42.270107   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.270117   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:42.270127   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:42.270189   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:42.301231   70152 cri.go:89] found id: ""
	I0924 19:48:42.301253   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.301261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:42.301266   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:42.301311   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:42.331554   70152 cri.go:89] found id: ""
	I0924 19:48:42.331586   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.331594   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:42.331602   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:42.331662   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:42.364673   70152 cri.go:89] found id: ""
	I0924 19:48:42.364696   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.364704   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:42.364710   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:42.364755   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:42.396290   70152 cri.go:89] found id: ""
	I0924 19:48:42.396320   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.396331   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:42.396339   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:42.396400   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:42.427249   70152 cri.go:89] found id: ""
	I0924 19:48:42.427277   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.427287   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:42.427295   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:42.427356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:42.462466   70152 cri.go:89] found id: ""
	I0924 19:48:42.462491   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.462499   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:42.462504   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:42.462557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:42.496774   70152 cri.go:89] found id: ""
	I0924 19:48:42.496797   70152 logs.go:276] 0 containers: []
	W0924 19:48:42.496805   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:42.496813   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:42.496825   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:42.569996   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:42.570024   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:42.570040   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:42.646881   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:42.646913   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:42.687089   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:42.687112   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:42.739266   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:42.739303   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.254320   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:45.266332   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:45.266404   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:45.296893   70152 cri.go:89] found id: ""
	I0924 19:48:45.296923   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.296933   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:45.296940   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:45.297003   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:45.328599   70152 cri.go:89] found id: ""
	I0924 19:48:45.328628   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.328639   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:45.328647   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:45.328704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:45.361362   70152 cri.go:89] found id: ""
	I0924 19:48:45.361394   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.361404   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:45.361414   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:45.361475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:45.395296   70152 cri.go:89] found id: ""
	I0924 19:48:45.395341   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.395352   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:45.395360   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:45.395424   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:45.430070   70152 cri.go:89] found id: ""
	I0924 19:48:45.430092   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.430100   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:45.430106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:45.430151   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:45.463979   70152 cri.go:89] found id: ""
	I0924 19:48:45.464005   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.464015   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:45.464023   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:45.464085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:45.512245   70152 cri.go:89] found id: ""
	I0924 19:48:45.512276   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.512286   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:45.512293   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:45.512353   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:45.544854   70152 cri.go:89] found id: ""
	I0924 19:48:45.544882   70152 logs.go:276] 0 containers: []
	W0924 19:48:45.544891   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:45.544902   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:45.544915   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:45.580352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:45.580390   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:45.630992   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:45.631025   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:45.643908   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:45.643936   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:45.715669   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:45.715689   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:45.715703   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:43.478814   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:45.978275   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:44.990088   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.990498   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:46.954696   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.455032   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:48.296204   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:48.308612   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:48.308675   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:48.339308   70152 cri.go:89] found id: ""
	I0924 19:48:48.339335   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.339345   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:48.339353   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:48.339412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:48.377248   70152 cri.go:89] found id: ""
	I0924 19:48:48.377277   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.377286   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:48.377292   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:48.377354   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:48.414199   70152 cri.go:89] found id: ""
	I0924 19:48:48.414230   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.414238   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:48.414244   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:48.414293   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:48.446262   70152 cri.go:89] found id: ""
	I0924 19:48:48.446291   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.446302   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:48.446309   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:48.446369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477125   70152 cri.go:89] found id: ""
	I0924 19:48:48.477155   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.477166   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:48.477174   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:48.477233   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:48.520836   70152 cri.go:89] found id: ""
	I0924 19:48:48.520867   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.520876   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:48.520881   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:48.520936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:48.557787   70152 cri.go:89] found id: ""
	I0924 19:48:48.557818   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.557829   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:48.557838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:48.557897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:48.589636   70152 cri.go:89] found id: ""
	I0924 19:48:48.589670   70152 logs.go:276] 0 containers: []
	W0924 19:48:48.589682   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:48.589692   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:48.589706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:48.667455   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:48.667486   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:48.704523   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:48.704559   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:48.754194   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:48.754223   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:48.766550   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:48.766576   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:48.833394   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.333900   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:51.347028   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:51.347094   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:51.383250   70152 cri.go:89] found id: ""
	I0924 19:48:51.383277   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.383285   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:51.383292   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:51.383356   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:51.415238   70152 cri.go:89] found id: ""
	I0924 19:48:51.415269   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.415282   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:51.415289   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:51.415349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:51.447358   70152 cri.go:89] found id: ""
	I0924 19:48:51.447388   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.447398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:51.447407   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:51.447469   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:51.479317   70152 cri.go:89] found id: ""
	I0924 19:48:51.479345   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.479354   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:51.479362   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:51.479423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:48.477928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:50.978108   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:49.491597   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.989509   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:53.989629   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.954573   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:54.455024   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:51.511976   70152 cri.go:89] found id: ""
	I0924 19:48:51.512008   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.512016   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:51.512022   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:51.512074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:51.544785   70152 cri.go:89] found id: ""
	I0924 19:48:51.544816   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.544824   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:51.544834   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:51.544896   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:51.577475   70152 cri.go:89] found id: ""
	I0924 19:48:51.577508   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.577519   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:51.577527   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:51.577599   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:51.612499   70152 cri.go:89] found id: ""
	I0924 19:48:51.612529   70152 logs.go:276] 0 containers: []
	W0924 19:48:51.612540   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:51.612551   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:51.612564   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:51.648429   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:51.648456   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:51.699980   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:51.700010   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:51.714695   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:51.714723   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:51.781872   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:51.781894   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:51.781909   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.361191   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:54.373189   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:54.373242   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:54.405816   70152 cri.go:89] found id: ""
	I0924 19:48:54.405844   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.405854   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:54.405862   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:54.405924   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:54.437907   70152 cri.go:89] found id: ""
	I0924 19:48:54.437935   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.437945   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:54.437952   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:54.438013   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:54.472020   70152 cri.go:89] found id: ""
	I0924 19:48:54.472042   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.472054   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:54.472061   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:54.472122   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:54.507185   70152 cri.go:89] found id: ""
	I0924 19:48:54.507206   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.507215   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:54.507220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:54.507269   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:54.540854   70152 cri.go:89] found id: ""
	I0924 19:48:54.540887   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.540898   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:54.540905   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:54.540973   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:54.572764   70152 cri.go:89] found id: ""
	I0924 19:48:54.572805   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.572816   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:54.572824   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:54.572897   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:54.605525   70152 cri.go:89] found id: ""
	I0924 19:48:54.605565   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.605573   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:54.605579   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:54.605652   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:54.637320   70152 cri.go:89] found id: ""
	I0924 19:48:54.637341   70152 logs.go:276] 0 containers: []
	W0924 19:48:54.637350   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:54.637357   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:54.637367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:54.691398   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:54.691433   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:54.704780   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:54.704805   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:54.779461   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:54.779487   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:54.779502   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:54.858131   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:54.858168   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
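
The cycle above repeats for the whole retry window: `kubectl describe nodes` fails because nothing answers on localhost:8443, and every `crictl ps -a --quiet --name=...` query returns an empty ID list, so no control-plane container (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) was ever created. Below is a minimal stand-alone sketch of the same two checks run by hand on the node; it assumes it runs on the minikube VM with crictl on the PATH and passwordless sudo, and is not minikube source code.

// sketch: reproduce the apiserver-port and CRI container checks from the log loop above
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// "connection refused" in the log means nothing is listening on the apiserver port.
	if conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second); err != nil {
		fmt.Println("apiserver not reachable:", err)
	} else {
		conn.Close()
		fmt.Println("apiserver port is open")
	}

	// `found id: ""` in the log means crictl reported no container IDs for these names.
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%-24s %d container(s)\n", name, len(strings.Fields(string(out))))
	}
}
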
	I0924 19:48:52.978487   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.477749   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.479091   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:55.989883   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.490132   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:56.954088   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:58.954576   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.955423   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:48:57.393677   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:48:57.406202   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:48:57.406273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:48:57.439351   70152 cri.go:89] found id: ""
	I0924 19:48:57.439381   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.439388   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:48:57.439394   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:48:57.439440   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:48:57.476966   70152 cri.go:89] found id: ""
	I0924 19:48:57.476993   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.477002   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:48:57.477007   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:48:57.477064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:48:57.510947   70152 cri.go:89] found id: ""
	I0924 19:48:57.510975   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.510986   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:48:57.510994   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:48:57.511054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:48:57.544252   70152 cri.go:89] found id: ""
	I0924 19:48:57.544277   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.544285   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:48:57.544292   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:48:57.544342   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:48:57.576781   70152 cri.go:89] found id: ""
	I0924 19:48:57.576810   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.576821   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:48:57.576829   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:48:57.576892   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:48:57.614243   70152 cri.go:89] found id: ""
	I0924 19:48:57.614269   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.614277   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:48:57.614283   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:48:57.614349   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:48:57.653477   70152 cri.go:89] found id: ""
	I0924 19:48:57.653506   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.653517   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:48:57.653524   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:48:57.653598   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:48:57.701253   70152 cri.go:89] found id: ""
	I0924 19:48:57.701283   70152 logs.go:276] 0 containers: []
	W0924 19:48:57.701291   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:48:57.701299   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:48:57.701311   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:48:57.721210   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:48:57.721239   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:48:57.799693   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:48:57.799720   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:48:57.799735   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:48:57.881561   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:48:57.881597   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:48:57.917473   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:48:57.917506   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:00.471475   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:00.485727   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:00.485801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:00.518443   70152 cri.go:89] found id: ""
	I0924 19:49:00.518472   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.518483   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:00.518490   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:00.518555   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:00.553964   70152 cri.go:89] found id: ""
	I0924 19:49:00.553991   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.554001   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:00.554009   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:00.554074   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:00.585507   70152 cri.go:89] found id: ""
	I0924 19:49:00.585537   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.585548   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:00.585555   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:00.585614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:00.618214   70152 cri.go:89] found id: ""
	I0924 19:49:00.618242   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.618253   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:00.618260   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:00.618319   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:00.649042   70152 cri.go:89] found id: ""
	I0924 19:49:00.649069   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.649077   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:00.649083   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:00.649133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:00.681021   70152 cri.go:89] found id: ""
	I0924 19:49:00.681050   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.681060   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:00.681067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:00.681128   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:00.712608   70152 cri.go:89] found id: ""
	I0924 19:49:00.712631   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.712640   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:00.712646   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:00.712693   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:00.744523   70152 cri.go:89] found id: ""
	I0924 19:49:00.744561   70152 logs.go:276] 0 containers: []
	W0924 19:49:00.744572   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:00.744584   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:00.744604   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:00.757179   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:00.757202   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:00.822163   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:00.822186   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:00.822197   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:00.897080   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:00.897125   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:00.934120   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:00.934149   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:48:59.977468   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:01.978394   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:00.491533   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:02.990346   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:03.454971   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:05.954492   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
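
The interleaved pod_ready lines (PIDs 69576, 69904, 69408) come from the other start-stop clusters polling their metrics-server pods, which stay Ready=False for the whole window. The sketch below is a rough equivalent of that readiness check written against client-go rather than minikube's pod_ready helper; it assumes a kubeconfig at ~/.kube/config pointing at the cluster under test, and the metrics-server pod names are simply matched by prefix as they appear in the log.

// sketch: report the PodReady condition for the metrics-server pods seen in the log
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		if !strings.HasPrefix(pod.Name, "metrics-server-") {
			continue
		}
		ready := corev1.ConditionUnknown
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				ready = cond.Status
			}
		}
		fmt.Printf("%s Ready=%s\n", pod.Name, ready)
	}
}
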
	I0924 19:49:03.487555   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:03.500318   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:03.500372   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:03.531327   70152 cri.go:89] found id: ""
	I0924 19:49:03.531355   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.531364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:03.531372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:03.531437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:03.563445   70152 cri.go:89] found id: ""
	I0924 19:49:03.563480   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.563491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:03.563498   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:03.563564   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:03.602093   70152 cri.go:89] found id: ""
	I0924 19:49:03.602118   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.602126   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:03.602134   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:03.602184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:03.633729   70152 cri.go:89] found id: ""
	I0924 19:49:03.633758   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.633769   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:03.633777   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:03.633838   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:03.664122   70152 cri.go:89] found id: ""
	I0924 19:49:03.664144   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.664154   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:03.664162   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:03.664227   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:03.697619   70152 cri.go:89] found id: ""
	I0924 19:49:03.697647   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.697656   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:03.697661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:03.697714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:03.729679   70152 cri.go:89] found id: ""
	I0924 19:49:03.729706   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.729714   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:03.729719   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:03.729768   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:03.760459   70152 cri.go:89] found id: ""
	I0924 19:49:03.760489   70152 logs.go:276] 0 containers: []
	W0924 19:49:03.760497   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:03.760505   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:03.760517   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:03.772452   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:03.772475   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:03.836658   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:03.836690   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:03.836706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:03.911243   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:03.911274   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:03.947676   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:03.947699   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:04.478117   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.977766   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:04.992137   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.490741   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:07.955747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:10.453756   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:06.501947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:06.513963   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:06.514037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:06.546355   70152 cri.go:89] found id: ""
	I0924 19:49:06.546382   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.546393   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:06.546401   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:06.546460   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:06.577502   70152 cri.go:89] found id: ""
	I0924 19:49:06.577530   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.577542   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:06.577554   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:06.577606   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:06.611622   70152 cri.go:89] found id: ""
	I0924 19:49:06.611644   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.611652   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:06.611658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:06.611716   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:06.646558   70152 cri.go:89] found id: ""
	I0924 19:49:06.646581   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.646589   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:06.646594   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:06.646656   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:06.678247   70152 cri.go:89] found id: ""
	I0924 19:49:06.678271   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.678282   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:06.678289   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:06.678351   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:06.718816   70152 cri.go:89] found id: ""
	I0924 19:49:06.718861   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.718874   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:06.718889   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:06.718952   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:06.751762   70152 cri.go:89] found id: ""
	I0924 19:49:06.751787   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.751798   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:06.751806   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:06.751867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:06.783466   70152 cri.go:89] found id: ""
	I0924 19:49:06.783494   70152 logs.go:276] 0 containers: []
	W0924 19:49:06.783502   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:06.783511   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:06.783523   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:06.796746   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:06.796773   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:06.860579   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:06.860608   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:06.860627   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:06.933363   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:06.933394   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:06.973189   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:06.973214   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.525823   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:09.537933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:09.537986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:09.568463   70152 cri.go:89] found id: ""
	I0924 19:49:09.568492   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.568503   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:09.568511   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:09.568566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:09.598218   70152 cri.go:89] found id: ""
	I0924 19:49:09.598250   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.598261   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:09.598268   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:09.598325   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:09.631792   70152 cri.go:89] found id: ""
	I0924 19:49:09.631817   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.631828   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:09.631839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:09.631906   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:09.668544   70152 cri.go:89] found id: ""
	I0924 19:49:09.668578   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.668586   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:09.668592   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:09.668643   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:09.699088   70152 cri.go:89] found id: ""
	I0924 19:49:09.699117   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.699126   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:09.699132   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:09.699192   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:09.731239   70152 cri.go:89] found id: ""
	I0924 19:49:09.731262   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.731273   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:09.731280   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:09.731341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:09.764349   70152 cri.go:89] found id: ""
	I0924 19:49:09.764372   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.764380   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:09.764386   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:09.764443   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:09.795675   70152 cri.go:89] found id: ""
	I0924 19:49:09.795698   70152 logs.go:276] 0 containers: []
	W0924 19:49:09.795707   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:09.795715   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:09.795733   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:09.829109   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:09.829133   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:09.882630   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:09.882666   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:09.894968   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:09.894992   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:09.955378   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:09.955400   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:09.955415   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:09.477323   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:11.477732   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:09.991122   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.490229   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.453790   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.454415   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:12.537431   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:12.549816   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:12.549878   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:12.585422   70152 cri.go:89] found id: ""
	I0924 19:49:12.585445   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.585453   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:12.585459   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:12.585505   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:12.621367   70152 cri.go:89] found id: ""
	I0924 19:49:12.621391   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.621401   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:12.621408   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:12.621471   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:12.656570   70152 cri.go:89] found id: ""
	I0924 19:49:12.656596   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.656603   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:12.656611   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:12.656671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:12.691193   70152 cri.go:89] found id: ""
	I0924 19:49:12.691215   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.691225   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:12.691233   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:12.691291   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:12.725507   70152 cri.go:89] found id: ""
	I0924 19:49:12.725535   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.725546   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:12.725554   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:12.725614   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:12.757046   70152 cri.go:89] found id: ""
	I0924 19:49:12.757072   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.757083   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:12.757091   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:12.757148   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:12.787049   70152 cri.go:89] found id: ""
	I0924 19:49:12.787075   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.787083   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:12.787088   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:12.787136   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:12.820797   70152 cri.go:89] found id: ""
	I0924 19:49:12.820823   70152 logs.go:276] 0 containers: []
	W0924 19:49:12.820831   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:12.820841   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:12.820859   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:12.873430   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:12.873462   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:12.886207   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:12.886234   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:12.957602   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:12.957623   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:12.957637   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:13.034776   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:13.034811   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:15.571177   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:15.583916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:15.583981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:15.618698   70152 cri.go:89] found id: ""
	I0924 19:49:15.618722   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.618730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:15.618735   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:15.618787   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:15.653693   70152 cri.go:89] found id: ""
	I0924 19:49:15.653726   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.653747   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:15.653755   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:15.653817   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:15.683926   70152 cri.go:89] found id: ""
	I0924 19:49:15.683955   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.683966   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:15.683974   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:15.684031   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:15.718671   70152 cri.go:89] found id: ""
	I0924 19:49:15.718704   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.718716   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:15.718724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:15.718784   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:15.748861   70152 cri.go:89] found id: ""
	I0924 19:49:15.748892   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.748904   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:15.748911   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:15.748985   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:15.778209   70152 cri.go:89] found id: ""
	I0924 19:49:15.778241   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.778252   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:15.778259   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:15.778323   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:15.808159   70152 cri.go:89] found id: ""
	I0924 19:49:15.808184   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.808192   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:15.808197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:15.808257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:15.840960   70152 cri.go:89] found id: ""
	I0924 19:49:15.840987   70152 logs.go:276] 0 containers: []
	W0924 19:49:15.840995   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:15.841003   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:15.841016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:15.891229   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:15.891259   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:15.903910   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:15.903935   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:15.967036   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:15.967061   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:15.967074   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:16.046511   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:16.046545   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:13.477971   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:15.478378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:14.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.990237   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.990750   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:16.954729   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.954769   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:18.586369   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:18.598590   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:18.598680   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:18.631438   70152 cri.go:89] found id: ""
	I0924 19:49:18.631465   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.631476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:18.631484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:18.631545   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:18.663461   70152 cri.go:89] found id: ""
	I0924 19:49:18.663484   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.663491   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:18.663497   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:18.663556   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:18.696292   70152 cri.go:89] found id: ""
	I0924 19:49:18.696373   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.696398   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:18.696411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:18.696475   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:18.728037   70152 cri.go:89] found id: ""
	I0924 19:49:18.728062   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.728073   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:18.728079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:18.728139   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:18.759784   70152 cri.go:89] found id: ""
	I0924 19:49:18.759819   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.759830   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:18.759838   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:18.759902   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:18.791856   70152 cri.go:89] found id: ""
	I0924 19:49:18.791886   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.791893   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:18.791899   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:18.791959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:18.822678   70152 cri.go:89] found id: ""
	I0924 19:49:18.822708   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.822719   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:18.822730   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:18.822794   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:18.852967   70152 cri.go:89] found id: ""
	I0924 19:49:18.852988   70152 logs.go:276] 0 containers: []
	W0924 19:49:18.852996   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:18.853005   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:18.853016   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:18.902600   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:18.902634   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:18.915475   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:18.915505   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:18.980260   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:18.980285   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:18.980299   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:19.064950   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:19.064986   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:17.977250   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:19.977563   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.977702   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.490563   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.989915   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.454031   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:23.954281   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.955057   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:21.603752   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:21.616039   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:21.616107   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:21.648228   70152 cri.go:89] found id: ""
	I0924 19:49:21.648253   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.648266   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:21.648274   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:21.648331   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:21.679823   70152 cri.go:89] found id: ""
	I0924 19:49:21.679850   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.679858   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:21.679866   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:21.679928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:21.712860   70152 cri.go:89] found id: ""
	I0924 19:49:21.712886   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.712895   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:21.712900   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:21.712951   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:21.749711   70152 cri.go:89] found id: ""
	I0924 19:49:21.749735   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.749742   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:21.749748   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:21.749793   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:21.784536   70152 cri.go:89] found id: ""
	I0924 19:49:21.784559   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.784567   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:21.784573   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:21.784631   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:21.813864   70152 cri.go:89] found id: ""
	I0924 19:49:21.813896   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.813907   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:21.813916   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:21.813981   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:21.843610   70152 cri.go:89] found id: ""
	I0924 19:49:21.843639   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.843647   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:21.843653   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:21.843704   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:21.874367   70152 cri.go:89] found id: ""
	I0924 19:49:21.874393   70152 logs.go:276] 0 containers: []
	W0924 19:49:21.874401   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:21.874410   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:21.874421   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:21.923539   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:21.923567   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:21.936994   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:21.937018   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:22.004243   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:22.004264   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:22.004277   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:22.079890   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:22.079921   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:24.616140   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:24.628197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:24.628257   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:24.660873   70152 cri.go:89] found id: ""
	I0924 19:49:24.660902   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.660912   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:24.660919   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:24.660978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:24.691592   70152 cri.go:89] found id: ""
	I0924 19:49:24.691618   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.691627   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:24.691633   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:24.691682   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:24.725803   70152 cri.go:89] found id: ""
	I0924 19:49:24.725835   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.725843   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:24.725849   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:24.725911   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:24.760080   70152 cri.go:89] found id: ""
	I0924 19:49:24.760112   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.760124   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:24.760131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:24.760198   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:24.792487   70152 cri.go:89] found id: ""
	I0924 19:49:24.792517   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.792527   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:24.792535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:24.792615   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:24.825037   70152 cri.go:89] found id: ""
	I0924 19:49:24.825058   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.825066   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:24.825072   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:24.825117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:24.857009   70152 cri.go:89] found id: ""
	I0924 19:49:24.857037   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.857048   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:24.857062   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:24.857119   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:24.887963   70152 cri.go:89] found id: ""
	I0924 19:49:24.887986   70152 logs.go:276] 0 containers: []
	W0924 19:49:24.887994   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:24.888001   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:24.888012   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:24.941971   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:24.942008   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:24.956355   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:24.956385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:25.020643   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:25.020671   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:25.020686   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:25.095261   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:25.095295   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
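The cycle above repeats for the rest of this stretch: probe for a kube-apiserver process, list CRI containers for each control-plane component (always finding none), then fall back to host-level logs (kubelet, dmesg, describe nodes, CRI-O, container status). As a rough illustration only, a retry loop of that shape could be sketched as below; the component list, commands, and roughly 3-second cadence are read off the log, while the helper names and overall structure are assumptions, not minikube's actual logs.go code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe seen in the log; pgrep exits 0 only when a match exists.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // listContainers runs `crictl ps -a --quiet --name=<name>` locally for
    // illustration; the log above runs the same command through ssh_runner.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for !apiserverRunning() {
            for _, c := range components {
                if ids, err := listContainers(c); err != nil || len(ids) == 0 {
                    fmt.Printf("No container was found matching %q\n", c)
                }
            }
            // with no containers to inspect, fall back to host-level logs
            exec.Command("bash", "-c", "sudo journalctl -u kubelet -n 400").Run()
            exec.Command("bash", "-c", "sudo journalctl -u crio -n 400").Run()
            time.Sleep(3 * time.Second) // the log shows roughly a 3s gap between cycles
        }
    }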
	I0924 19:49:24.477423   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:26.477967   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:25.990406   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.490276   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:28.454466   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.955002   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:27.632228   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:27.645002   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:27.645059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:27.677386   70152 cri.go:89] found id: ""
	I0924 19:49:27.677411   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.677420   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:27.677427   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:27.677487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:27.709731   70152 cri.go:89] found id: ""
	I0924 19:49:27.709760   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.709771   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:27.709779   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:27.709846   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:27.741065   70152 cri.go:89] found id: ""
	I0924 19:49:27.741092   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.741100   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:27.741106   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:27.741165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:27.771493   70152 cri.go:89] found id: ""
	I0924 19:49:27.771515   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.771524   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:27.771531   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:27.771592   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:27.803233   70152 cri.go:89] found id: ""
	I0924 19:49:27.803266   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.803273   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:27.803279   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:27.803341   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:27.837295   70152 cri.go:89] found id: ""
	I0924 19:49:27.837320   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.837331   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:27.837341   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:27.837412   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:27.867289   70152 cri.go:89] found id: ""
	I0924 19:49:27.867314   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.867323   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:27.867328   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:27.867374   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:27.896590   70152 cri.go:89] found id: ""
	I0924 19:49:27.896615   70152 logs.go:276] 0 containers: []
	W0924 19:49:27.896623   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:27.896634   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:27.896646   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:27.944564   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:27.944596   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:27.958719   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:27.958740   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:28.028986   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:28.029011   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:28.029027   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:28.103888   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:28.103920   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:30.639148   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:30.651500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:30.651570   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:30.689449   70152 cri.go:89] found id: ""
	I0924 19:49:30.689472   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.689481   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:30.689488   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:30.689566   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:30.722953   70152 cri.go:89] found id: ""
	I0924 19:49:30.722982   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.722993   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:30.723004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:30.723057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:30.760960   70152 cri.go:89] found id: ""
	I0924 19:49:30.760985   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.760996   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:30.761004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:30.761066   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:30.794784   70152 cri.go:89] found id: ""
	I0924 19:49:30.794812   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.794821   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:30.794842   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:30.794894   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:30.826127   70152 cri.go:89] found id: ""
	I0924 19:49:30.826155   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.826164   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:30.826172   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:30.826235   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:30.857392   70152 cri.go:89] found id: ""
	I0924 19:49:30.857422   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.857432   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:30.857446   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:30.857510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:30.887561   70152 cri.go:89] found id: ""
	I0924 19:49:30.887588   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.887600   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:30.887622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:30.887692   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:30.922486   70152 cri.go:89] found id: ""
	I0924 19:49:30.922514   70152 logs.go:276] 0 containers: []
	W0924 19:49:30.922526   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:30.922537   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:30.922551   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:30.972454   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:30.972480   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:30.986873   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:30.986895   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:31.060505   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:31.060525   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:31.060544   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:31.138923   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:31.138955   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:28.977756   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.980419   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:30.989909   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:32.991815   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.454204   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.454890   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:33.674979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:33.687073   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:33.687149   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:33.719712   70152 cri.go:89] found id: ""
	I0924 19:49:33.719742   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.719751   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:33.719757   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:33.719810   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:33.751183   70152 cri.go:89] found id: ""
	I0924 19:49:33.751210   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.751221   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:33.751229   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:33.751274   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:33.781748   70152 cri.go:89] found id: ""
	I0924 19:49:33.781781   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.781793   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:33.781801   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:33.781873   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:33.813287   70152 cri.go:89] found id: ""
	I0924 19:49:33.813311   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.813319   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:33.813324   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:33.813369   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:33.848270   70152 cri.go:89] found id: ""
	I0924 19:49:33.848299   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.848311   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:33.848319   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:33.848383   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:33.877790   70152 cri.go:89] found id: ""
	I0924 19:49:33.877817   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.877826   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:33.877832   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:33.877890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:33.911668   70152 cri.go:89] found id: ""
	I0924 19:49:33.911693   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.911701   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:33.911706   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:33.911759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:33.943924   70152 cri.go:89] found id: ""
	I0924 19:49:33.943952   70152 logs.go:276] 0 containers: []
	W0924 19:49:33.943963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:33.943974   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:33.943987   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:33.980520   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:33.980560   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:34.031240   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:34.031275   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:34.044180   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:34.044210   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:34.110143   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:34.110165   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:34.110176   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:33.477340   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.478344   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:35.490449   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.989317   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:37.954444   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.954569   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:36.694093   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:36.706006   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:36.706080   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:36.738955   70152 cri.go:89] found id: ""
	I0924 19:49:36.738981   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.738990   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:36.738995   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:36.739059   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:36.774414   70152 cri.go:89] found id: ""
	I0924 19:49:36.774437   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.774445   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:36.774451   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:36.774503   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:36.805821   70152 cri.go:89] found id: ""
	I0924 19:49:36.805851   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.805861   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:36.805867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:36.805922   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:36.835128   70152 cri.go:89] found id: ""
	I0924 19:49:36.835154   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.835162   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:36.835168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:36.835221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:36.865448   70152 cri.go:89] found id: ""
	I0924 19:49:36.865474   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.865485   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:36.865492   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:36.865552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:36.896694   70152 cri.go:89] found id: ""
	I0924 19:49:36.896722   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.896731   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:36.896736   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:36.896801   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:36.927380   70152 cri.go:89] found id: ""
	I0924 19:49:36.927406   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.927416   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:36.927426   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:36.927484   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:36.957581   70152 cri.go:89] found id: ""
	I0924 19:49:36.957604   70152 logs.go:276] 0 containers: []
	W0924 19:49:36.957614   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:36.957624   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:36.957638   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:37.007182   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:37.007211   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:37.021536   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:37.021561   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:37.092442   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:37.092465   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:37.092477   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:37.167488   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:37.167524   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:39.703778   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:39.715914   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:39.715983   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:39.751296   70152 cri.go:89] found id: ""
	I0924 19:49:39.751319   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.751329   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:39.751341   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:39.751409   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:39.787095   70152 cri.go:89] found id: ""
	I0924 19:49:39.787123   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.787132   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:39.787137   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:39.787184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:39.822142   70152 cri.go:89] found id: ""
	I0924 19:49:39.822164   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.822173   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:39.822179   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:39.822226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:39.853830   70152 cri.go:89] found id: ""
	I0924 19:49:39.853854   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.853864   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:39.853871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:39.853932   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:39.891029   70152 cri.go:89] found id: ""
	I0924 19:49:39.891079   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.891091   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:39.891100   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:39.891162   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:39.926162   70152 cri.go:89] found id: ""
	I0924 19:49:39.926194   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.926204   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:39.926211   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:39.926262   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:39.964320   70152 cri.go:89] found id: ""
	I0924 19:49:39.964348   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.964358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:39.964365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:39.964421   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:39.997596   70152 cri.go:89] found id: ""
	I0924 19:49:39.997617   70152 logs.go:276] 0 containers: []
	W0924 19:49:39.997627   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:39.997636   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:39.997649   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:40.045538   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:40.045568   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:40.058114   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:40.058139   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:40.125927   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:40.125946   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:40.125958   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:40.202722   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:40.202758   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:37.978393   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:40.476855   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.477425   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:39.990444   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:41.991094   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.454568   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.953805   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:42.742707   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:42.754910   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:42.754986   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:42.788775   70152 cri.go:89] found id: ""
	I0924 19:49:42.788798   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.788807   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:42.788813   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:42.788875   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:42.824396   70152 cri.go:89] found id: ""
	I0924 19:49:42.824420   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.824430   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:42.824436   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:42.824498   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:42.854848   70152 cri.go:89] found id: ""
	I0924 19:49:42.854873   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.854880   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:42.854886   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:42.854936   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:42.885033   70152 cri.go:89] found id: ""
	I0924 19:49:42.885056   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.885063   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:42.885069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:42.885114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:42.914427   70152 cri.go:89] found id: ""
	I0924 19:49:42.914451   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.914458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:42.914464   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:42.914509   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:42.954444   70152 cri.go:89] found id: ""
	I0924 19:49:42.954471   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.954481   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:42.954488   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:42.954544   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:42.998183   70152 cri.go:89] found id: ""
	I0924 19:49:42.998207   70152 logs.go:276] 0 containers: []
	W0924 19:49:42.998215   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:42.998220   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:42.998273   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:43.041904   70152 cri.go:89] found id: ""
	I0924 19:49:43.041933   70152 logs.go:276] 0 containers: []
	W0924 19:49:43.041944   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:43.041957   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:43.041973   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:43.091733   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:43.091770   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:43.104674   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:43.104707   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:43.169712   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:43.169732   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:43.169745   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:43.248378   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:43.248409   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:45.790015   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:45.801902   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:45.801972   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:45.833030   70152 cri.go:89] found id: ""
	I0924 19:49:45.833053   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.833061   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:45.833066   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:45.833117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:45.863209   70152 cri.go:89] found id: ""
	I0924 19:49:45.863233   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.863241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:45.863247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:45.863307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:45.893004   70152 cri.go:89] found id: ""
	I0924 19:49:45.893035   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.893045   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:45.893053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:45.893114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:45.924485   70152 cri.go:89] found id: ""
	I0924 19:49:45.924515   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.924527   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:45.924535   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:45.924593   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:45.956880   70152 cri.go:89] found id: ""
	I0924 19:49:45.956907   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.956914   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:45.956919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:45.956967   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:45.990579   70152 cri.go:89] found id: ""
	I0924 19:49:45.990602   70152 logs.go:276] 0 containers: []
	W0924 19:49:45.990614   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:45.990622   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:45.990677   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:46.025905   70152 cri.go:89] found id: ""
	I0924 19:49:46.025944   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.025959   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:46.025966   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:46.026028   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:46.057401   70152 cri.go:89] found id: ""
	I0924 19:49:46.057427   70152 logs.go:276] 0 containers: []
	W0924 19:49:46.057438   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:46.057449   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:46.057463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:46.107081   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:46.107115   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:46.121398   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:46.121426   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:46.184370   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:46.184395   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:46.184410   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:46.266061   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:46.266104   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:44.477907   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.478391   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:44.489995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.989227   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.990995   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:46.953875   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:48.955013   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
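Interleaved with that loop, the other start attempts (pids 69408, 69576, 69904) keep polling their metrics-server pods, none of which ever reports Ready. The check amounts to reading the pod's Ready condition; the sketch below is a hedged, standalone approximation of such a probe (the pod name is copied from the log lines above, everything else is assumed rather than minikube's pod_ready.go implementation).

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reads the pod's Ready condition with a standard kubectl jsonpath
    // query; nothing here is minikube-specific.
    func podReady(namespace, name string) bool {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        // pod name copied from the log above; purely illustrative
        for !podReady("kube-system", "metrics-server-6867b74b74-jfrhm") {
            fmt.Println(`pod has status "Ready":"False"`)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("pod is Ready")
    }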
	I0924 19:49:48.803970   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:48.816671   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:48.816737   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:48.849566   70152 cri.go:89] found id: ""
	I0924 19:49:48.849628   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.849652   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:48.849660   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:48.849720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:48.885963   70152 cri.go:89] found id: ""
	I0924 19:49:48.885992   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.885999   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:48.886004   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:48.886054   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:48.921710   70152 cri.go:89] found id: ""
	I0924 19:49:48.921744   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.921755   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:48.921765   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:48.921821   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:48.954602   70152 cri.go:89] found id: ""
	I0924 19:49:48.954639   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.954650   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:48.954658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:48.954718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:48.988071   70152 cri.go:89] found id: ""
	I0924 19:49:48.988098   70152 logs.go:276] 0 containers: []
	W0924 19:49:48.988109   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:48.988117   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:48.988177   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:49.020475   70152 cri.go:89] found id: ""
	I0924 19:49:49.020503   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.020512   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:49.020519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:49.020597   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:49.055890   70152 cri.go:89] found id: ""
	I0924 19:49:49.055915   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.055925   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:49.055933   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:49.055999   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:49.092976   70152 cri.go:89] found id: ""
	I0924 19:49:49.093010   70152 logs.go:276] 0 containers: []
	W0924 19:49:49.093022   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:49.093033   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:49.093051   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:49.106598   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:49.106623   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:49.175320   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:49.175349   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:49.175362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:49.252922   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:49.252953   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:49.292364   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:49.292391   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:48.977530   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:50.978078   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.489983   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.990114   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.454857   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:53.954413   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.955245   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:51.843520   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:51.855864   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:51.855930   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:51.885300   70152 cri.go:89] found id: ""
	I0924 19:49:51.885329   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.885342   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:51.885350   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:51.885407   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:51.915183   70152 cri.go:89] found id: ""
	I0924 19:49:51.915212   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.915223   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:51.915230   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:51.915286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:51.944774   70152 cri.go:89] found id: ""
	I0924 19:49:51.944797   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.944807   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:51.944815   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:51.944886   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:51.983691   70152 cri.go:89] found id: ""
	I0924 19:49:51.983718   70152 logs.go:276] 0 containers: []
	W0924 19:49:51.983729   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:51.983737   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:51.983791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:52.019728   70152 cri.go:89] found id: ""
	I0924 19:49:52.019760   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.019770   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:52.019776   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:52.019835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:52.055405   70152 cri.go:89] found id: ""
	I0924 19:49:52.055435   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.055446   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:52.055453   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:52.055518   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:52.088417   70152 cri.go:89] found id: ""
	I0924 19:49:52.088447   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.088457   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:52.088465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:52.088527   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:52.119496   70152 cri.go:89] found id: ""
	I0924 19:49:52.119527   70152 logs.go:276] 0 containers: []
	W0924 19:49:52.119539   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:52.119550   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:52.119563   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.193494   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:52.193529   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:52.231440   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:52.231464   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:52.281384   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:52.281418   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:52.293893   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:52.293919   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:52.362404   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
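Every `describe nodes` attempt in this stretch fails the same way because nothing is listening on the apiserver port. Independent of kubectl, that can be confirmed with a plain TCP probe; the snippet below is only an illustrative sketch (the port 8443 comes from the error text above, the rest is assumed).

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8443 is the address kubectl is being refused on above
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // same symptom as the refused connection
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }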
	I0924 19:49:54.863156   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:54.876871   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:54.876946   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:54.909444   70152 cri.go:89] found id: ""
	I0924 19:49:54.909471   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.909478   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:54.909484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:54.909536   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:54.939687   70152 cri.go:89] found id: ""
	I0924 19:49:54.939715   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.939726   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:54.939733   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:54.939805   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:54.971156   70152 cri.go:89] found id: ""
	I0924 19:49:54.971180   70152 logs.go:276] 0 containers: []
	W0924 19:49:54.971188   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:54.971193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:54.971244   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:55.001865   70152 cri.go:89] found id: ""
	I0924 19:49:55.001891   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.001899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:55.001904   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:55.001961   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:55.032044   70152 cri.go:89] found id: ""
	I0924 19:49:55.032072   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.032084   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:55.032092   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:55.032152   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:55.061644   70152 cri.go:89] found id: ""
	I0924 19:49:55.061667   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.061675   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:55.061681   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:55.061727   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:55.093015   70152 cri.go:89] found id: ""
	I0924 19:49:55.093041   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.093049   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:55.093055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:55.093121   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:55.126041   70152 cri.go:89] found id: ""
	I0924 19:49:55.126065   70152 logs.go:276] 0 containers: []
	W0924 19:49:55.126073   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:55.126081   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:55.126091   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:55.168803   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:55.168826   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:55.227121   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:55.227158   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:49:55.249868   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:55.249893   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:55.316401   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:55.316422   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:55.316434   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:52.978705   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:55.478802   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:56.489685   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.990273   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:58.453854   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.954407   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:49:57.898654   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:49:57.910667   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:49:57.910728   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:49:57.942696   70152 cri.go:89] found id: ""
	I0924 19:49:57.942722   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.942730   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:49:57.942736   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:49:57.942802   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:49:57.981222   70152 cri.go:89] found id: ""
	I0924 19:49:57.981244   70152 logs.go:276] 0 containers: []
	W0924 19:49:57.981254   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:49:57.981261   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:49:57.981308   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:49:58.013135   70152 cri.go:89] found id: ""
	I0924 19:49:58.013174   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.013185   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:49:58.013193   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:49:58.013255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:49:58.048815   70152 cri.go:89] found id: ""
	I0924 19:49:58.048847   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.048859   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:49:58.048867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:49:58.048933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:49:58.081365   70152 cri.go:89] found id: ""
	I0924 19:49:58.081395   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.081406   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:49:58.081413   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:49:58.081478   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:49:58.112804   70152 cri.go:89] found id: ""
	I0924 19:49:58.112828   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.112838   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:49:58.112848   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:49:58.112913   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:49:58.147412   70152 cri.go:89] found id: ""
	I0924 19:49:58.147448   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.147459   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:49:58.147467   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:49:58.147529   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:49:58.178922   70152 cri.go:89] found id: ""
	I0924 19:49:58.178952   70152 logs.go:276] 0 containers: []
	W0924 19:49:58.178963   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:49:58.178974   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:49:58.178993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:49:58.250967   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:49:58.250993   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:49:58.251011   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:49:58.329734   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:49:58.329767   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:58.366692   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:49:58.366722   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:49:58.418466   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:49:58.418503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:00.931624   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:00.949687   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:00.949756   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:01.004428   70152 cri.go:89] found id: ""
	I0924 19:50:01.004456   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.004464   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:01.004471   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:01.004532   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:01.038024   70152 cri.go:89] found id: ""
	I0924 19:50:01.038050   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.038060   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:01.038065   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:01.038111   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:01.069831   70152 cri.go:89] found id: ""
	I0924 19:50:01.069855   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.069862   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:01.069867   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:01.069933   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:01.100918   70152 cri.go:89] found id: ""
	I0924 19:50:01.100944   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.100951   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:01.100957   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:01.101006   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:01.131309   70152 cri.go:89] found id: ""
	I0924 19:50:01.131340   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.131351   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:01.131359   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:01.131419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:01.161779   70152 cri.go:89] found id: ""
	I0924 19:50:01.161806   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.161817   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:01.161825   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:01.161888   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:01.196626   70152 cri.go:89] found id: ""
	I0924 19:50:01.196655   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.196665   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:01.196672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:01.196733   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:01.226447   70152 cri.go:89] found id: ""
	I0924 19:50:01.226475   70152 logs.go:276] 0 containers: []
	W0924 19:50:01.226486   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:01.226496   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:01.226510   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:01.279093   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:01.279121   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:01.292435   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:01.292463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:01.360868   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:01.360901   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:01.360917   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:01.442988   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:01.443021   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:49:57.978989   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.477211   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.477451   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:00.990593   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.489738   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:02.955427   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.455000   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:03.984021   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:03.997429   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:03.997508   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:04.030344   70152 cri.go:89] found id: ""
	I0924 19:50:04.030374   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.030387   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:04.030395   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:04.030448   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:04.063968   70152 cri.go:89] found id: ""
	I0924 19:50:04.064003   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.064016   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:04.064023   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:04.064083   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:04.097724   70152 cri.go:89] found id: ""
	I0924 19:50:04.097752   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.097764   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:04.097772   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:04.097825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:04.129533   70152 cri.go:89] found id: ""
	I0924 19:50:04.129570   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.129580   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:04.129588   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:04.129665   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:04.166056   70152 cri.go:89] found id: ""
	I0924 19:50:04.166086   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.166098   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:04.166105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:04.166164   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:04.200051   70152 cri.go:89] found id: ""
	I0924 19:50:04.200077   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.200087   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:04.200094   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:04.200205   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:04.232647   70152 cri.go:89] found id: ""
	I0924 19:50:04.232671   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.232679   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:04.232686   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:04.232744   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:04.264091   70152 cri.go:89] found id: ""
	I0924 19:50:04.264115   70152 logs.go:276] 0 containers: []
	W0924 19:50:04.264123   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:04.264131   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:04.264140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:04.313904   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:04.313939   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:04.326759   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:04.326782   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:04.390347   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:04.390372   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:04.390389   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:04.470473   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:04.470509   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:04.478092   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:06.976928   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:05.490259   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.490644   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.954747   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.455548   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:07.009267   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:07.022465   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:07.022534   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:07.053438   70152 cri.go:89] found id: ""
	I0924 19:50:07.053466   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.053476   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:07.053484   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:07.053552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:07.085802   70152 cri.go:89] found id: ""
	I0924 19:50:07.085824   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.085833   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:07.085840   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:07.085903   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:07.121020   70152 cri.go:89] found id: ""
	I0924 19:50:07.121043   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.121051   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:07.121056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:07.121108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:07.150529   70152 cri.go:89] found id: ""
	I0924 19:50:07.150557   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.150568   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:07.150576   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:07.150663   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:07.181915   70152 cri.go:89] found id: ""
	I0924 19:50:07.181942   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.181953   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:07.181959   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:07.182021   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:07.215152   70152 cri.go:89] found id: ""
	I0924 19:50:07.215185   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.215195   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:07.215203   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:07.215263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:07.248336   70152 cri.go:89] found id: ""
	I0924 19:50:07.248365   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.248373   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:07.248378   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:07.248423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:07.281829   70152 cri.go:89] found id: ""
	I0924 19:50:07.281854   70152 logs.go:276] 0 containers: []
	W0924 19:50:07.281862   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:07.281871   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:07.281885   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:07.329674   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:07.329706   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:07.342257   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:07.342283   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:07.406426   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:07.406452   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:07.406466   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:07.493765   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:07.493796   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.033393   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:10.046435   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:10.046513   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:10.077993   70152 cri.go:89] found id: ""
	I0924 19:50:10.078024   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.078034   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:10.078044   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:10.078108   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:10.115200   70152 cri.go:89] found id: ""
	I0924 19:50:10.115232   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.115243   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:10.115251   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:10.115317   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:10.151154   70152 cri.go:89] found id: ""
	I0924 19:50:10.151179   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.151189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:10.151197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:10.151254   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:10.184177   70152 cri.go:89] found id: ""
	I0924 19:50:10.184204   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.184212   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:10.184218   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:10.184268   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:10.218932   70152 cri.go:89] found id: ""
	I0924 19:50:10.218962   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.218973   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:10.218981   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:10.219042   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:10.250973   70152 cri.go:89] found id: ""
	I0924 19:50:10.251001   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.251012   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:10.251020   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:10.251076   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:10.280296   70152 cri.go:89] found id: ""
	I0924 19:50:10.280319   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.280328   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:10.280333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:10.280385   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:10.312386   70152 cri.go:89] found id: ""
	I0924 19:50:10.312411   70152 logs.go:276] 0 containers: []
	W0924 19:50:10.312419   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:10.312426   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:10.312437   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:10.377281   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:10.377309   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:10.377326   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:10.451806   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:10.451839   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:10.489154   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:10.489184   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:10.536203   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:10.536233   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:08.977378   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:10.977966   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:09.990141   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:11.990257   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.990360   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:12.954861   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.454763   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:13.049785   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:13.062642   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:13.062720   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:13.096627   70152 cri.go:89] found id: ""
	I0924 19:50:13.096658   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.096669   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:13.096680   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:13.096743   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:13.127361   70152 cri.go:89] found id: ""
	I0924 19:50:13.127389   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.127400   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:13.127409   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:13.127468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:13.160081   70152 cri.go:89] found id: ""
	I0924 19:50:13.160111   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.160123   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:13.160131   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:13.160184   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:13.192955   70152 cri.go:89] found id: ""
	I0924 19:50:13.192986   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.192997   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:13.193004   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:13.193057   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:13.230978   70152 cri.go:89] found id: ""
	I0924 19:50:13.231000   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.231008   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:13.231014   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:13.231064   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:13.262146   70152 cri.go:89] found id: ""
	I0924 19:50:13.262179   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.262190   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:13.262198   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:13.262258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:13.297019   70152 cri.go:89] found id: ""
	I0924 19:50:13.297054   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.297063   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:13.297070   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:13.297117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:13.327009   70152 cri.go:89] found id: ""
	I0924 19:50:13.327037   70152 logs.go:276] 0 containers: []
	W0924 19:50:13.327046   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:13.327057   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:13.327073   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:13.375465   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:13.375493   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:13.389851   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:13.389884   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:13.452486   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:13.452524   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:13.452538   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:13.531372   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:13.531405   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:16.066979   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:16.079767   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:16.079825   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:16.110927   70152 cri.go:89] found id: ""
	I0924 19:50:16.110951   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.110960   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:16.110965   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:16.111011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:16.142012   70152 cri.go:89] found id: ""
	I0924 19:50:16.142040   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.142050   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:16.142055   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:16.142112   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:16.175039   70152 cri.go:89] found id: ""
	I0924 19:50:16.175068   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.175079   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:16.175086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:16.175146   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:16.206778   70152 cri.go:89] found id: ""
	I0924 19:50:16.206800   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.206808   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:16.206814   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:16.206890   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:16.237724   70152 cri.go:89] found id: ""
	I0924 19:50:16.237752   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.237763   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:16.237770   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:16.237835   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:16.268823   70152 cri.go:89] found id: ""
	I0924 19:50:16.268846   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.268855   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:16.268861   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:16.268931   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:16.301548   70152 cri.go:89] found id: ""
	I0924 19:50:16.301570   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.301578   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:16.301584   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:16.301635   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:16.334781   70152 cri.go:89] found id: ""
	I0924 19:50:16.334812   70152 logs.go:276] 0 containers: []
	W0924 19:50:16.334820   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:16.334844   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:16.334864   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:16.384025   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:16.384057   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:16.396528   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:16.396556   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:16.460428   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:16.460458   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:16.460472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:12.978203   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.477525   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.478192   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:15.990394   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.991181   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:17.955580   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.455446   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:16.541109   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:16.541146   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.078388   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:19.090964   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:19.091052   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:19.122890   70152 cri.go:89] found id: ""
	I0924 19:50:19.122915   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.122923   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:19.122928   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:19.122988   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:19.155983   70152 cri.go:89] found id: ""
	I0924 19:50:19.156013   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.156024   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:19.156031   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:19.156085   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:19.190366   70152 cri.go:89] found id: ""
	I0924 19:50:19.190389   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.190397   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:19.190403   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:19.190459   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:19.221713   70152 cri.go:89] found id: ""
	I0924 19:50:19.221737   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.221745   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:19.221751   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:19.221809   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:19.256586   70152 cri.go:89] found id: ""
	I0924 19:50:19.256615   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.256625   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:19.256637   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:19.256700   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:19.288092   70152 cri.go:89] found id: ""
	I0924 19:50:19.288119   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.288130   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:19.288141   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:19.288204   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:19.320743   70152 cri.go:89] found id: ""
	I0924 19:50:19.320771   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.320780   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:19.320785   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:19.320837   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:19.352967   70152 cri.go:89] found id: ""
	I0924 19:50:19.352999   70152 logs.go:276] 0 containers: []
	W0924 19:50:19.353009   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:19.353019   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:19.353035   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:19.365690   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:19.365715   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:19.431204   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:19.431229   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:19.431244   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:19.512030   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:19.512063   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:19.549631   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:19.549664   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:19.977859   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:21.978267   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:20.489819   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.490667   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.954178   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.954267   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:22.105290   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:22.117532   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:22.117607   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:22.147959   70152 cri.go:89] found id: ""
	I0924 19:50:22.147983   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.147994   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:22.148002   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:22.148060   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:22.178511   70152 cri.go:89] found id: ""
	I0924 19:50:22.178540   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.178551   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:22.178556   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:22.178603   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:22.210030   70152 cri.go:89] found id: ""
	I0924 19:50:22.210054   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.210061   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:22.210067   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:22.210125   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:22.243010   70152 cri.go:89] found id: ""
	I0924 19:50:22.243037   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.243048   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:22.243056   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:22.243117   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:22.273021   70152 cri.go:89] found id: ""
	I0924 19:50:22.273051   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.273062   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:22.273069   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:22.273133   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:22.303372   70152 cri.go:89] found id: ""
	I0924 19:50:22.303403   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.303415   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:22.303422   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:22.303481   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:22.335124   70152 cri.go:89] found id: ""
	I0924 19:50:22.335150   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.335158   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:22.335164   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:22.335222   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:22.368230   70152 cri.go:89] found id: ""
	I0924 19:50:22.368255   70152 logs.go:276] 0 containers: []
	W0924 19:50:22.368265   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:22.368276   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:22.368290   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:22.418998   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:22.419031   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:22.431654   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:22.431684   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:22.505336   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:22.505354   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:22.505367   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:22.584941   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:22.584976   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:25.127489   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:25.140142   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:25.140216   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:25.169946   70152 cri.go:89] found id: ""
	I0924 19:50:25.169974   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.169982   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:25.169988   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:25.170049   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:25.203298   70152 cri.go:89] found id: ""
	I0924 19:50:25.203328   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.203349   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:25.203357   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:25.203419   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:25.236902   70152 cri.go:89] found id: ""
	I0924 19:50:25.236930   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.236941   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:25.236949   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:25.237011   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:25.268295   70152 cri.go:89] found id: ""
	I0924 19:50:25.268318   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.268328   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:25.268333   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:25.268388   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:25.299869   70152 cri.go:89] found id: ""
	I0924 19:50:25.299899   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.299911   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:25.299919   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:25.299978   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:25.332373   70152 cri.go:89] found id: ""
	I0924 19:50:25.332400   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.332411   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:25.332418   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:25.332477   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:25.365791   70152 cri.go:89] found id: ""
	I0924 19:50:25.365820   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.365831   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:25.365839   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:25.365904   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:25.398170   70152 cri.go:89] found id: ""
	I0924 19:50:25.398193   70152 logs.go:276] 0 containers: []
	W0924 19:50:25.398201   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:25.398209   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:25.398220   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:25.447933   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:25.447967   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:25.461244   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:25.461269   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:25.528100   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:25.528125   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:25.528138   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:25.603029   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:25.603062   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:24.477585   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.477776   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:24.491205   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:26.990562   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:27.454650   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.954657   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:28.141635   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:28.154551   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:28.154611   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:28.186275   70152 cri.go:89] found id: ""
	I0924 19:50:28.186299   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.186307   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:28.186312   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:28.186371   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:28.218840   70152 cri.go:89] found id: ""
	I0924 19:50:28.218868   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.218879   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:28.218887   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:28.218955   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:28.253478   70152 cri.go:89] found id: ""
	I0924 19:50:28.253503   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.253512   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:28.253519   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:28.253579   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:28.284854   70152 cri.go:89] found id: ""
	I0924 19:50:28.284888   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.284899   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:28.284908   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:28.284959   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:28.315453   70152 cri.go:89] found id: ""
	I0924 19:50:28.315478   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.315487   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:28.315500   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:28.315550   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:28.347455   70152 cri.go:89] found id: ""
	I0924 19:50:28.347484   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.347492   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:28.347498   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:28.347552   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:28.383651   70152 cri.go:89] found id: ""
	I0924 19:50:28.383683   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.383694   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:28.383702   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:28.383766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:28.424649   70152 cri.go:89] found id: ""
	I0924 19:50:28.424682   70152 logs.go:276] 0 containers: []
	W0924 19:50:28.424693   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:28.424704   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:28.424718   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.477985   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:28.478020   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:28.490902   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:28.490930   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:28.561252   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:28.561273   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:28.561284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:28.635590   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:28.635635   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:31.172062   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:31.184868   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:31.184939   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:31.216419   70152 cri.go:89] found id: ""
	I0924 19:50:31.216446   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.216456   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:31.216464   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:31.216525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:31.252757   70152 cri.go:89] found id: ""
	I0924 19:50:31.252787   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.252797   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:31.252804   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:31.252867   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:31.287792   70152 cri.go:89] found id: ""
	I0924 19:50:31.287820   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.287827   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:31.287833   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:31.287883   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:31.322891   70152 cri.go:89] found id: ""
	I0924 19:50:31.322917   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.322927   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:31.322934   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:31.322997   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:31.358353   70152 cri.go:89] found id: ""
	I0924 19:50:31.358384   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.358394   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:31.358401   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:31.358461   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:31.388617   70152 cri.go:89] found id: ""
	I0924 19:50:31.388643   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.388654   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:31.388661   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:31.388714   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:31.421655   70152 cri.go:89] found id: ""
	I0924 19:50:31.421682   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.421690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:31.421695   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:31.421747   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:31.456995   70152 cri.go:89] found id: ""
	I0924 19:50:31.457020   70152 logs.go:276] 0 containers: []
	W0924 19:50:31.457029   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:31.457037   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:31.457048   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:28.478052   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:30.977483   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:29.490310   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.990052   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:33.991439   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:32.454421   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:34.456333   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:31.507691   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:31.507725   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:31.521553   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:31.521582   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:31.587673   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:31.587695   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:31.587710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:31.674153   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:31.674193   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:34.213947   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:34.227779   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:34.227852   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:34.265513   70152 cri.go:89] found id: ""
	I0924 19:50:34.265541   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.265568   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:34.265575   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:34.265632   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:34.305317   70152 cri.go:89] found id: ""
	I0924 19:50:34.305340   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.305348   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:34.305354   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:34.305402   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:34.341144   70152 cri.go:89] found id: ""
	I0924 19:50:34.341168   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.341176   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:34.341183   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:34.341232   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:34.372469   70152 cri.go:89] found id: ""
	I0924 19:50:34.372491   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.372499   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:34.372505   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:34.372551   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:34.408329   70152 cri.go:89] found id: ""
	I0924 19:50:34.408351   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.408360   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:34.408365   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:34.408423   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:34.440666   70152 cri.go:89] found id: ""
	I0924 19:50:34.440695   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.440707   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:34.440714   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:34.440782   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:34.475013   70152 cri.go:89] found id: ""
	I0924 19:50:34.475040   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.475047   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:34.475053   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:34.475105   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:34.507051   70152 cri.go:89] found id: ""
	I0924 19:50:34.507077   70152 logs.go:276] 0 containers: []
	W0924 19:50:34.507084   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:34.507092   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:34.507102   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:34.562506   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:34.562549   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:34.575316   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:34.575340   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:34.641903   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:34.641927   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:34.641938   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:34.719868   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:34.719903   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:32.978271   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:35.477581   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.479350   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.490263   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.490795   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:36.953906   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:38.955474   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:37.279465   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:37.291991   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:37.292065   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:37.322097   70152 cri.go:89] found id: ""
	I0924 19:50:37.322123   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.322134   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:37.322141   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:37.322199   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:37.353697   70152 cri.go:89] found id: ""
	I0924 19:50:37.353729   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.353740   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:37.353748   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:37.353807   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:37.385622   70152 cri.go:89] found id: ""
	I0924 19:50:37.385653   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.385664   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:37.385672   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:37.385735   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:37.420972   70152 cri.go:89] found id: ""
	I0924 19:50:37.420995   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.421004   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:37.421012   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:37.421070   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:37.451496   70152 cri.go:89] found id: ""
	I0924 19:50:37.451523   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.451534   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:37.451541   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:37.451619   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:37.486954   70152 cri.go:89] found id: ""
	I0924 19:50:37.486982   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.486992   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:37.487000   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:37.487061   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:37.523068   70152 cri.go:89] found id: ""
	I0924 19:50:37.523089   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.523097   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:37.523105   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:37.523165   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:37.559935   70152 cri.go:89] found id: ""
	I0924 19:50:37.559962   70152 logs.go:276] 0 containers: []
	W0924 19:50:37.559970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:37.559978   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:37.559988   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:37.597976   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:37.598006   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:37.647577   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:37.647610   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:37.660872   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:37.660901   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:37.728264   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:37.728293   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:37.728307   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.308026   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:40.320316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:40.320373   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:40.357099   70152 cri.go:89] found id: ""
	I0924 19:50:40.357127   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.357137   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:40.357145   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:40.357207   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:40.390676   70152 cri.go:89] found id: ""
	I0924 19:50:40.390703   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.390712   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:40.390717   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:40.390766   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:40.422752   70152 cri.go:89] found id: ""
	I0924 19:50:40.422784   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.422796   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:40.422804   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:40.422887   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:40.457024   70152 cri.go:89] found id: ""
	I0924 19:50:40.457046   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.457054   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:40.457059   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:40.457106   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:40.503120   70152 cri.go:89] found id: ""
	I0924 19:50:40.503149   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.503160   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:40.503168   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:40.503225   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:40.543399   70152 cri.go:89] found id: ""
	I0924 19:50:40.543426   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.543435   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:40.543441   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:40.543487   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:40.577654   70152 cri.go:89] found id: ""
	I0924 19:50:40.577679   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.577690   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:40.577698   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:40.577754   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:40.610097   70152 cri.go:89] found id: ""
	I0924 19:50:40.610120   70152 logs.go:276] 0 containers: []
	W0924 19:50:40.610128   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:40.610136   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:40.610145   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:40.661400   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:40.661436   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:40.674254   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:40.674284   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:40.740319   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:40.740342   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:40.740352   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:40.818666   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:40.818704   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:39.979184   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.981561   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:40.491417   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:42.991420   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:41.454480   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.456158   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.955070   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:43.356693   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:43.369234   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:43.369295   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:43.407933   70152 cri.go:89] found id: ""
	I0924 19:50:43.407960   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.407971   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:43.407978   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:43.408037   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:43.442923   70152 cri.go:89] found id: ""
	I0924 19:50:43.442956   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.442968   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:43.442979   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:43.443029   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.478148   70152 cri.go:89] found id: ""
	I0924 19:50:43.478177   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.478189   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:43.478197   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:43.478256   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:43.515029   70152 cri.go:89] found id: ""
	I0924 19:50:43.515060   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.515071   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:43.515079   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:43.515144   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:43.551026   70152 cri.go:89] found id: ""
	I0924 19:50:43.551058   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.551070   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:43.551077   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:43.551140   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:43.587155   70152 cri.go:89] found id: ""
	I0924 19:50:43.587188   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.587197   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:43.587205   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:43.587263   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:43.620935   70152 cri.go:89] found id: ""
	I0924 19:50:43.620958   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.620976   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:43.620984   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:43.621045   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:43.654477   70152 cri.go:89] found id: ""
	I0924 19:50:43.654512   70152 logs.go:276] 0 containers: []
	W0924 19:50:43.654523   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:43.654534   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:43.654546   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:43.689352   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:43.689385   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:43.742646   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:43.742683   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:43.755773   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:43.755798   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:43.818546   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:43.818577   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:43.818595   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.397466   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:46.410320   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:46.410392   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:46.443003   70152 cri.go:89] found id: ""
	I0924 19:50:46.443029   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.443041   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:46.443049   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:46.443114   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:46.484239   70152 cri.go:89] found id: ""
	I0924 19:50:46.484264   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.484274   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:46.484282   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:46.484339   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:43.981787   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.478489   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:45.489723   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.491171   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:47.955545   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:50.454211   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:46.519192   70152 cri.go:89] found id: ""
	I0924 19:50:46.519221   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.519230   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:46.519236   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:46.519286   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:46.554588   70152 cri.go:89] found id: ""
	I0924 19:50:46.554611   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.554619   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:46.554626   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:46.554685   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:46.586074   70152 cri.go:89] found id: ""
	I0924 19:50:46.586101   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.586110   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:46.586116   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:46.586167   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:46.620119   70152 cri.go:89] found id: ""
	I0924 19:50:46.620149   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.620159   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:46.620166   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:46.620226   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:46.653447   70152 cri.go:89] found id: ""
	I0924 19:50:46.653477   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.653488   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:46.653495   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:46.653557   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:46.686079   70152 cri.go:89] found id: ""
	I0924 19:50:46.686105   70152 logs.go:276] 0 containers: []
	W0924 19:50:46.686116   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:46.686127   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:46.686140   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:46.699847   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:46.699891   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:46.766407   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:46.766432   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:46.766447   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:46.846697   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:46.846730   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:46.901551   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:46.901578   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:49.460047   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:49.473516   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:49.473586   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:49.508180   70152 cri.go:89] found id: ""
	I0924 19:50:49.508211   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.508220   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:49.508226   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:49.508289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:49.540891   70152 cri.go:89] found id: ""
	I0924 19:50:49.540920   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.540928   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:49.540934   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:49.540984   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:49.577008   70152 cri.go:89] found id: ""
	I0924 19:50:49.577038   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.577048   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:49.577054   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:49.577132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:49.615176   70152 cri.go:89] found id: ""
	I0924 19:50:49.615206   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.615216   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:49.615226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:49.615289   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:49.653135   70152 cri.go:89] found id: ""
	I0924 19:50:49.653167   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.653177   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:49.653184   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:49.653250   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:49.691032   70152 cri.go:89] found id: ""
	I0924 19:50:49.691064   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.691074   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:49.691080   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:49.691143   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:49.725243   70152 cri.go:89] found id: ""
	I0924 19:50:49.725274   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.725287   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:49.725294   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:49.725363   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:49.759288   70152 cri.go:89] found id: ""
	I0924 19:50:49.759316   70152 logs.go:276] 0 containers: []
	W0924 19:50:49.759325   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:49.759333   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:49.759345   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:49.831323   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:49.831345   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:49.831362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:49.907302   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:49.907336   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:49.946386   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:49.946424   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:50.002321   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:50.002362   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:48.978153   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:51.477442   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:49.991214   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.490034   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.454585   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:54.455120   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:52.517380   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:52.531613   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:52.531671   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:52.568158   70152 cri.go:89] found id: ""
	I0924 19:50:52.568188   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.568199   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:52.568207   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:52.568258   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:52.606203   70152 cri.go:89] found id: ""
	I0924 19:50:52.606232   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.606241   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:52.606247   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:52.606307   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:52.647180   70152 cri.go:89] found id: ""
	I0924 19:50:52.647206   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.647218   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:52.647226   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:52.647290   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:52.692260   70152 cri.go:89] found id: ""
	I0924 19:50:52.692289   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.692308   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:52.692316   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:52.692382   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:52.745648   70152 cri.go:89] found id: ""
	I0924 19:50:52.745673   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.745684   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:52.745693   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:52.745759   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:52.782429   70152 cri.go:89] found id: ""
	I0924 19:50:52.782451   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.782458   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:52.782463   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:52.782510   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:52.817286   70152 cri.go:89] found id: ""
	I0924 19:50:52.817312   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.817320   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:52.817326   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:52.817387   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:52.851401   70152 cri.go:89] found id: ""
	I0924 19:50:52.851433   70152 logs.go:276] 0 containers: []
	W0924 19:50:52.851442   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:52.851451   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:52.851463   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:52.921634   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:52.921661   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:52.921674   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:53.005676   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:53.005710   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:53.042056   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:53.042092   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:53.092871   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:53.092908   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.605865   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:55.618713   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:55.618791   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:55.652326   70152 cri.go:89] found id: ""
	I0924 19:50:55.652354   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.652364   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:55.652372   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:55.652434   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:55.686218   70152 cri.go:89] found id: ""
	I0924 19:50:55.686241   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.686249   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:55.686256   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:55.686318   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:55.718678   70152 cri.go:89] found id: ""
	I0924 19:50:55.718704   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.718713   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:55.718720   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:55.718789   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:55.750122   70152 cri.go:89] found id: ""
	I0924 19:50:55.750149   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.750157   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:55.750163   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:55.750213   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:55.780676   70152 cri.go:89] found id: ""
	I0924 19:50:55.780706   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.780717   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:55.780724   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:55.780806   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:55.814742   70152 cri.go:89] found id: ""
	I0924 19:50:55.814771   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.814783   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:55.814790   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:55.814872   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:55.847599   70152 cri.go:89] found id: ""
	I0924 19:50:55.847624   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.847635   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:55.847643   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:55.847708   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:55.882999   70152 cri.go:89] found id: ""
	I0924 19:50:55.883025   70152 logs.go:276] 0 containers: []
	W0924 19:50:55.883034   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:55.883042   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.883053   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:55.948795   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:55.948823   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:55.948840   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:56.032946   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:56.032984   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:56.069628   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:56.069657   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:56.118408   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:56.118444   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:53.478043   69576 pod_ready.go:103] pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:53.979410   69576 pod_ready.go:82] duration metric: took 4m0.007472265s for pod "metrics-server-6867b74b74-w7bfj" in "kube-system" namespace to be "Ready" ...
	E0924 19:50:53.979439   69576 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 19:50:53.979449   69576 pod_ready.go:39] duration metric: took 4m5.045187364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:50:53.979468   69576 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:50:53.979501   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:53.979557   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:54.014613   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:54.014636   69576 cri.go:89] found id: ""
	I0924 19:50:54.014646   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:54.014702   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.019232   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:54.019304   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:54.054018   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:54.054042   69576 cri.go:89] found id: ""
	I0924 19:50:54.054050   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:54.054111   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.057867   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:54.057937   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:54.090458   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:54.090485   69576 cri.go:89] found id: ""
	I0924 19:50:54.090495   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:54.090549   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.094660   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:54.094735   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:54.128438   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:54.128462   69576 cri.go:89] found id: ""
	I0924 19:50:54.128471   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:54.128524   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.132209   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:54.132261   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:54.170563   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:54.170584   69576 cri.go:89] found id: ""
	I0924 19:50:54.170591   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:54.170640   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.174546   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:54.174615   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:54.211448   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.211468   69576 cri.go:89] found id: ""
	I0924 19:50:54.211475   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:54.211521   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.215297   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:54.215350   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:54.252930   69576 cri.go:89] found id: ""
	I0924 19:50:54.252955   69576 logs.go:276] 0 containers: []
	W0924 19:50:54.252963   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:54.252969   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:54.253023   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:54.296111   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:54.296135   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.296141   69576 cri.go:89] found id: ""
	I0924 19:50:54.296148   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:54.296194   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.299983   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:54.303864   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:54.303899   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:54.340679   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:54.340703   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:54.867298   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:54.867333   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:54.908630   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:54.908659   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:54.974028   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:54.974059   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:55.034164   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:55.034200   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:55.070416   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:55.070453   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:55.107831   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:55.107857   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:55.143183   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:55.143215   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:55.160049   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:55.160082   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:55.267331   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:55.267367   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:55.310718   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:55.310750   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:55.349628   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:55.349656   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:54.990762   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:57.490198   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:56.954742   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:50:58.955989   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
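These pod_ready checks are the ones that eventually time out after 4m0s and fail the metrics-server related assertions. A quick way to see why a pod such as metrics-server-6867b74b74-jfrhm never reports Ready is to describe it and look at recent events; the pod name comes from the log above, while the <profile> kubectl context is a placeholder, not taken from this run:

	kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-jfrhm
	kubectl --context <profile> -n kube-system get events --sort-by=.lastTimestamp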
	I0924 19:50:58.631571   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:58.645369   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:58.645437   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:58.679988   70152 cri.go:89] found id: ""
	I0924 19:50:58.680016   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.680027   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:50:58.680034   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:58.680095   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.717081   70152 cri.go:89] found id: ""
	I0924 19:50:58.717104   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.717114   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:50:58.717121   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.717182   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.749093   70152 cri.go:89] found id: ""
	I0924 19:50:58.749115   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.749124   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:50:58.749129   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.749175   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.785026   70152 cri.go:89] found id: ""
	I0924 19:50:58.785056   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.785078   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:50:58.785086   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.785174   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.821615   70152 cri.go:89] found id: ""
	I0924 19:50:58.821641   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.821651   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:50:58.821658   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.821718   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.857520   70152 cri.go:89] found id: ""
	I0924 19:50:58.857549   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.857561   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:50:58.857569   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.857638   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.892972   70152 cri.go:89] found id: ""
	I0924 19:50:58.892997   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.893008   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.893016   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:50:58.893082   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:50:58.924716   70152 cri.go:89] found id: ""
	I0924 19:50:58.924743   70152 logs.go:276] 0 containers: []
	W0924 19:50:58.924756   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:50:58.924764   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:50:58.924776   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:58.961221   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:58.961249   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.013865   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.013892   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.028436   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:59.028472   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:50:59.099161   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:50:59.099187   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:59.099201   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:57.916622   69576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:50:57.931591   69576 api_server.go:72] duration metric: took 4m15.73662766s to wait for apiserver process to appear ...
	I0924 19:50:57.931630   69576 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:50:57.931675   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:50:57.931721   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:50:57.969570   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:57.969597   69576 cri.go:89] found id: ""
	I0924 19:50:57.969604   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:50:57.969650   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:57.973550   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:50:57.973602   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:50:58.015873   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.015897   69576 cri.go:89] found id: ""
	I0924 19:50:58.015907   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:50:58.015959   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.020777   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:50:58.020848   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:50:58.052771   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:58.052792   69576 cri.go:89] found id: ""
	I0924 19:50:58.052801   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:50:58.052861   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.056640   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:50:58.056709   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:50:58.092869   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:50:58.092888   69576 cri.go:89] found id: ""
	I0924 19:50:58.092894   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:50:58.092949   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.097223   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:50:58.097293   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:50:58.131376   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:58.131403   69576 cri.go:89] found id: ""
	I0924 19:50:58.131414   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:50:58.131498   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.135886   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:50:58.135943   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:50:58.171962   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:58.171985   69576 cri.go:89] found id: ""
	I0924 19:50:58.171992   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:50:58.172037   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.175714   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:50:58.175770   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:50:58.209329   69576 cri.go:89] found id: ""
	I0924 19:50:58.209358   69576 logs.go:276] 0 containers: []
	W0924 19:50:58.209366   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:50:58.209372   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:50:58.209432   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:50:58.242311   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:58.242331   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.242336   69576 cri.go:89] found id: ""
	I0924 19:50:58.242344   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:50:58.242399   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.246774   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:50:58.250891   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:50:58.250909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:58.736768   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:50:58.736811   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:50:58.838645   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:50:58.838673   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:50:58.884334   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:50:58.884366   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:50:58.933785   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:50:58.933817   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:50:58.968065   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:50:58.968099   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:50:59.007212   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:50:59.007238   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:50:59.067571   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:50:59.067608   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:50:59.103890   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:50:59.103913   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:50:59.157991   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:50:59.158021   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:50:59.225690   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:50:59.225724   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:50:59.239742   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:50:59.239768   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:50:59.272319   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:50:59.272354   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.809089   69576 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0924 19:51:01.813972   69576 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0924 19:51:01.815080   69576 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:01.815100   69576 api_server.go:131] duration metric: took 3.883463484s to wait for apiserver health ...
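The healthz probe above is a plain HTTPS GET against the apiserver. It can be repeated manually; the address 192.168.39.134:8443 comes from the log, and -k is an assumption added here to skip verification of the minikube-generated certificate:

	curl -k https://192.168.39.134:8443/healthz
	# prints "ok" when the apiserver is healthy, matching the 200 response logged above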
	I0924 19:51:01.815107   69576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:01.815127   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.815166   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.857140   69576 cri.go:89] found id: "8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:01.857164   69576 cri.go:89] found id: ""
	I0924 19:51:01.857174   69576 logs.go:276] 1 containers: [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca]
	I0924 19:51:01.857235   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.861136   69576 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.861199   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.894133   69576 cri.go:89] found id: "b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:01.894156   69576 cri.go:89] found id: ""
	I0924 19:51:01.894165   69576 logs.go:276] 1 containers: [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4]
	I0924 19:51:01.894222   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.898001   69576 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.898073   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.933652   69576 cri.go:89] found id: "5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:01.933677   69576 cri.go:89] found id: ""
	I0924 19:51:01.933686   69576 logs.go:276] 1 containers: [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80]
	I0924 19:51:01.933762   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.938487   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.938549   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.979500   69576 cri.go:89] found id: "68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:01.979527   69576 cri.go:89] found id: ""
	I0924 19:51:01.979536   69576 logs.go:276] 1 containers: [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d]
	I0924 19:51:01.979597   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:01.983762   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.983827   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:02.024402   69576 cri.go:89] found id: "35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.024427   69576 cri.go:89] found id: ""
	I0924 19:51:02.024436   69576 logs.go:276] 1 containers: [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8]
	I0924 19:51:02.024501   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.028273   69576 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:02.028330   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:02.070987   69576 cri.go:89] found id: "b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.071006   69576 cri.go:89] found id: ""
	I0924 19:51:02.071013   69576 logs.go:276] 1 containers: [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8]
	I0924 19:51:02.071058   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.076176   69576 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:02.076244   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:02.119921   69576 cri.go:89] found id: ""
	I0924 19:51:02.119950   69576 logs.go:276] 0 containers: []
	W0924 19:51:02.119960   69576 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:02.119967   69576 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 19:51:02.120026   69576 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 19:51:02.156531   69576 cri.go:89] found id: "50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.156562   69576 cri.go:89] found id: "daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.156568   69576 cri.go:89] found id: ""
	I0924 19:51:02.156577   69576 logs.go:276] 2 containers: [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba]
	I0924 19:51:02.156643   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.161262   69576 ssh_runner.go:195] Run: which crictl
	I0924 19:51:02.165581   69576 logs.go:123] Gathering logs for kube-controller-manager [b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8] ...
	I0924 19:51:02.165602   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6f32e0b22cfb48c483e522e510dc7b52aebf8779cdfee6b21ef139f19b9b7f8"
	I0924 19:51:02.216300   69576 logs.go:123] Gathering logs for storage-provisioner [50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d] ...
	I0924 19:51:02.216327   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50a3e972e70a2f7c12c2188d5fccc8dac91f6832f8537fe43fe93f1c2131154d"
	I0924 19:51:02.262879   69576 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.262909   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:50:59.490689   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.992004   69904 pod_ready.go:103] pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:02.984419   69904 pod_ready.go:82] duration metric: took 4m0.00033045s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:02.984461   69904 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rgcll" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:02.984478   69904 pod_ready.go:39] duration metric: took 4m13.271652912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:02.984508   69904 kubeadm.go:597] duration metric: took 4m21.208228185s to restartPrimaryControlPlane
	W0924 19:51:02.984576   69904 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:02.984610   69904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:02.643876   69576 logs.go:123] Gathering logs for coredns [5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80] ...
	I0924 19:51:02.643917   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5701cbef602b06de690f042e0511346ff74c3d698dbbd358deaaf24dba72ba80"
	I0924 19:51:02.680131   69576 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.680170   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.693192   69576 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.693225   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 19:51:02.788649   69576 logs.go:123] Gathering logs for kube-apiserver [8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca] ...
	I0924 19:51:02.788678   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c6b0840dab2d27ee2b9f2750fd909f8d96400acdcec047f7a6cbda376c3f8ca"
	I0924 19:51:02.836539   69576 logs.go:123] Gathering logs for etcd [b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4] ...
	I0924 19:51:02.836571   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b09b340cd637af46938604c83a744d45afff67473005f3283069a613888a93d4"
	I0924 19:51:02.889363   69576 logs.go:123] Gathering logs for kube-scheduler [68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d] ...
	I0924 19:51:02.889393   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68e60ea512c889aa3eaf6fdcd9335a4daeb077be62351c190b25a3d852495a5d"
	I0924 19:51:02.925388   69576 logs.go:123] Gathering logs for kube-proxy [35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8] ...
	I0924 19:51:02.925416   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35d91507f646a466d67f0255473f37818e383e1c35133d50b2c9f45ffc3b80c8"
	I0924 19:51:02.962512   69576 logs.go:123] Gathering logs for storage-provisioner [daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba] ...
	I0924 19:51:02.962545   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daabc8f3d80f5d3fc06e9217cba28ab99a21bba6679ecadb6c09db863c7f22ba"
	I0924 19:51:02.999119   69576 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:02.999144   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:03.072647   69576 logs.go:123] Gathering logs for container status ...
	I0924 19:51:03.072683   69576 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:05.629114   69576 system_pods.go:59] 8 kube-system pods found
	I0924 19:51:05.629141   69576 system_pods.go:61] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.629145   69576 system_pods.go:61] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.629149   69576 system_pods.go:61] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.629153   69576 system_pods.go:61] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.629156   69576 system_pods.go:61] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.629159   69576 system_pods.go:61] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.629164   69576 system_pods.go:61] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.629169   69576 system_pods.go:61] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.629177   69576 system_pods.go:74] duration metric: took 3.814063168s to wait for pod list to return data ...
	I0924 19:51:05.629183   69576 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:05.632105   69576 default_sa.go:45] found service account: "default"
	I0924 19:51:05.632126   69576 default_sa.go:55] duration metric: took 2.937635ms for default service account to be created ...
	I0924 19:51:05.632133   69576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:05.637121   69576 system_pods.go:86] 8 kube-system pods found
	I0924 19:51:05.637152   69576 system_pods.go:89] "coredns-7c65d6cfc9-qb2mm" [d38dedd6-6361-419c-891d-e5a5189776db] Running
	I0924 19:51:05.637160   69576 system_pods.go:89] "etcd-no-preload-965745" [8351cb5e-74cf-4341-abe2-4d1879d4e8c0] Running
	I0924 19:51:05.637167   69576 system_pods.go:89] "kube-apiserver-no-preload-965745" [301d3b9c-d776-4587-9493-8293026ea494] Running
	I0924 19:51:05.637174   69576 system_pods.go:89] "kube-controller-manager-no-preload-965745" [3811331c-e7fc-4bbf-8b96-5ff9bb6ca23b] Running
	I0924 19:51:05.637179   69576 system_pods.go:89] "kube-proxy-ng8vf" [7520fc22-94af-4575-8df7-4476677d1093] Running
	I0924 19:51:05.637185   69576 system_pods.go:89] "kube-scheduler-no-preload-965745" [8ba49896-c4e8-45da-bb45-f06493ac7405] Running
	I0924 19:51:05.637196   69576 system_pods.go:89] "metrics-server-6867b74b74-w7bfj" [52962ba3-838e-4cb9-9349-ca3760633a12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:05.637205   69576 system_pods.go:89] "storage-provisioner" [f25f7a78-bc14-4613-aed5-ab00c8d39366] Running
	I0924 19:51:05.637214   69576 system_pods.go:126] duration metric: took 5.075319ms to wait for k8s-apps to be running ...
	I0924 19:51:05.637222   69576 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:05.637264   69576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:05.654706   69576 system_svc.go:56] duration metric: took 17.472783ms WaitForService to wait for kubelet
	I0924 19:51:05.654809   69576 kubeadm.go:582] duration metric: took 4m23.459841471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:05.654865   69576 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:05.658334   69576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:05.658353   69576 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:05.658363   69576 node_conditions.go:105] duration metric: took 3.492035ms to run NodePressure ...
	I0924 19:51:05.658373   69576 start.go:241] waiting for startup goroutines ...
	I0924 19:51:05.658379   69576 start.go:246] waiting for cluster config update ...
	I0924 19:51:05.658389   69576 start.go:255] writing updated cluster config ...
	I0924 19:51:05.658691   69576 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:05.706059   69576 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:05.708303   69576 out.go:177] * Done! kubectl is now configured to use "no-preload-965745" cluster and "default" namespace by default
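From this point the no-preload cluster can be inspected directly, assuming minikube's usual behaviour of naming the kubectl context after the profile (the cluster name in the "Done!" line above):

	kubectl --context no-preload-965745 get nodes
	kubectl --context no-preload-965745 -n kube-system get pods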
	I0924 19:51:01.454367   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:03.954114   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:05.955269   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:01.696298   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:01.709055   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:51:01.709132   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:51:01.741383   70152 cri.go:89] found id: ""
	I0924 19:51:01.741409   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.741416   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:51:01.741422   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:51:01.741476   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:51:01.773123   70152 cri.go:89] found id: ""
	I0924 19:51:01.773148   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.773156   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:51:01.773162   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:51:01.773221   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:51:01.806752   70152 cri.go:89] found id: ""
	I0924 19:51:01.806784   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.806792   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:51:01.806798   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:51:01.806928   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:51:01.851739   70152 cri.go:89] found id: ""
	I0924 19:51:01.851769   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.851780   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:51:01.851786   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:51:01.851850   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:51:01.885163   70152 cri.go:89] found id: ""
	I0924 19:51:01.885192   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.885201   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:51:01.885207   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:51:01.885255   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:51:01.918891   70152 cri.go:89] found id: ""
	I0924 19:51:01.918918   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.918929   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:51:01.918936   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:51:01.918996   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:51:01.953367   70152 cri.go:89] found id: ""
	I0924 19:51:01.953394   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.953403   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:51:01.953411   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:51:01.953468   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:51:01.993937   70152 cri.go:89] found id: ""
	I0924 19:51:01.993961   70152 logs.go:276] 0 containers: []
	W0924 19:51:01.993970   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:51:01.993981   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:51:01.993993   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:51:02.049467   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:51:02.049503   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:51:02.065074   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:51:02.065117   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:51:02.141811   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:51:02.141837   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:51:02.141852   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:51:02.224507   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:51:02.224534   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 19:51:04.766806   70152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:04.779518   70152 kubeadm.go:597] duration metric: took 4m3.458373s to restartPrimaryControlPlane
	W0924 19:51:04.779588   70152 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:04.779617   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:09.285959   70152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.506320559s)
	I0924 19:51:09.286033   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:09.299784   70152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:09.311238   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:09.320580   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:09.320603   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:09.320658   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:51:09.329216   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:09.329281   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:09.337964   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:51:09.346324   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:09.346383   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:09.354788   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.363191   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:09.363249   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:09.372141   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:51:09.380290   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:09.380344   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
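The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint and is removed otherwise, so that the following kubeadm init can regenerate it. A condensed sketch of the same idea (the endpoint and file names are taken from the log; the loop itself is an illustration, not minikube code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done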
	I0924 19:51:09.388996   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:09.456034   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:51:09.456144   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:09.585473   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:09.585697   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:09.585935   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:51:09.749623   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:09.751504   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:09.751599   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:09.751702   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:09.751845   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:09.751955   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:09.752059   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:09.752137   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:09.752237   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:09.752332   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:09.752430   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:09.752536   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:09.752602   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:09.752683   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:09.881554   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:10.269203   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:10.518480   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:10.712060   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:10.727454   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:10.728411   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:10.728478   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:10.849448   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:08.454350   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.455005   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:10.851100   70152 out.go:235]   - Booting up control plane ...
	I0924 19:51:10.851237   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:10.860097   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:10.860987   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:10.861716   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:10.863845   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
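While kubeadm waits for the static pods (up to 4m0s, as logged above), their progress can be checked on the node; the manifest folder and the crictl filter come from the log, the minikube ssh wrapper and <profile> placeholder are assumptions:

	minikube ssh -p <profile> -- sudo ls /etc/kubernetes/manifests
	minikube ssh -p <profile> -- sudo crictl ps -a --name=kube-apiserver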
	I0924 19:51:12.954243   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:14.957843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:17.453731   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:19.453953   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:21.454522   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:23.455166   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:25.953843   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.077330   69904 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.092691625s)
	I0924 19:51:29.077484   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:29.091493   69904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:51:29.101026   69904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:51:29.109749   69904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:51:29.109768   69904 kubeadm.go:157] found existing configuration files:
	
	I0924 19:51:29.109814   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 19:51:29.118177   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:51:29.118225   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:51:29.126963   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 19:51:29.135458   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:51:29.135514   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:51:29.144373   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.153026   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:51:29.153104   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:51:29.162719   69904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 19:51:29.171667   69904 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:51:29.171722   69904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
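	The cleanup above is minikube's stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected API endpoint and removed if it is missing or points elsewhere, so the following kubeadm init regenerates it. A minimal shell sketch of the same logic, using only the endpoint and file list shown in this log (illustrative, not the minikube source):

	    # Hypothetical re-creation of the stale-config cleanup seen above.
	    endpoint="https://control-plane.minikube.internal:8444"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        path="/etc/kubernetes/$f"
	        # grep exits non-zero if the file is absent or does not mention the endpoint
	        if ! sudo grep -q "$endpoint" "$path" 2>/dev/null; then
	            sudo rm -f "$path"    # let 'kubeadm init' write a fresh kubeconfig
	        fi
	    done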
	I0924 19:51:29.180370   69904 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:51:29.220747   69904 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:51:29.220873   69904 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:51:29.319144   69904 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:51:29.319289   69904 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:51:29.319416   69904 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:51:29.328410   69904 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:51:29.329855   69904 out.go:235]   - Generating certificates and keys ...
	I0924 19:51:29.329956   69904 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:51:29.330042   69904 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:51:29.330148   69904 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:51:29.330251   69904 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:51:29.330369   69904 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:51:29.330451   69904 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:51:29.330557   69904 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:51:29.330668   69904 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:51:29.330772   69904 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:51:29.330900   69904 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:51:29.330966   69904 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:51:29.331042   69904 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:51:29.504958   69904 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:51:29.642370   69904 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:51:29.735556   69904 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:51:29.870700   69904 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:51:30.048778   69904 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:51:30.049481   69904 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:51:30.052686   69904 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:51:27.954118   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:29.955223   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:30.054684   69904 out.go:235]   - Booting up control plane ...
	I0924 19:51:30.054786   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:51:30.054935   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:51:30.055710   69904 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:51:30.073679   69904 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:51:30.079375   69904 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:51:30.079437   69904 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:51:30.208692   69904 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:51:30.208799   69904 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:51:31.210485   69904 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001878491s
	I0924 19:51:31.210602   69904 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:51:35.712648   69904 kubeadm.go:310] [api-check] The API server is healthy after 4.501942024s
	I0924 19:51:35.726167   69904 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:51:35.745115   69904 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:51:35.778631   69904 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:51:35.778910   69904 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-093771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:51:35.793809   69904 kubeadm.go:310] [bootstrap-token] Using token: joc3du.4csctmt42s6jz0an
	I0924 19:51:31.955402   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:33.956250   69408 pod_ready.go:103] pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:35.949705   69408 pod_ready.go:82] duration metric: took 4m0.001155579s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" ...
	E0924 19:51:35.949733   69408 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jfrhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 19:51:35.949755   69408 pod_ready.go:39] duration metric: took 4m8.530526042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:35.949787   69408 kubeadm.go:597] duration metric: took 4m16.768464943s to restartPrimaryControlPlane
	W0924 19:51:35.949874   69408 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 19:51:35.949908   69408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:51:35.795255   69904 out.go:235]   - Configuring RBAC rules ...
	I0924 19:51:35.795389   69904 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:51:35.800809   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:51:35.819531   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:51:35.825453   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:51:35.831439   69904 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:51:35.835651   69904 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:51:36.119903   69904 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:51:36.554891   69904 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:51:37.120103   69904 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:51:37.121012   69904 kubeadm.go:310] 
	I0924 19:51:37.121125   69904 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:51:37.121146   69904 kubeadm.go:310] 
	I0924 19:51:37.121242   69904 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:51:37.121260   69904 kubeadm.go:310] 
	I0924 19:51:37.121309   69904 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:51:37.121403   69904 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:51:37.121469   69904 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:51:37.121477   69904 kubeadm.go:310] 
	I0924 19:51:37.121557   69904 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:51:37.121578   69904 kubeadm.go:310] 
	I0924 19:51:37.121659   69904 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:51:37.121674   69904 kubeadm.go:310] 
	I0924 19:51:37.121765   69904 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:51:37.121891   69904 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:51:37.122002   69904 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:51:37.122013   69904 kubeadm.go:310] 
	I0924 19:51:37.122122   69904 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:51:37.122239   69904 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:51:37.122247   69904 kubeadm.go:310] 
	I0924 19:51:37.122333   69904 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122470   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:51:37.122509   69904 kubeadm.go:310] 	--control-plane 
	I0924 19:51:37.122520   69904 kubeadm.go:310] 
	I0924 19:51:37.122598   69904 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:51:37.122606   69904 kubeadm.go:310] 
	I0924 19:51:37.122720   69904 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token joc3du.4csctmt42s6jz0an \
	I0924 19:51:37.122884   69904 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:51:37.124443   69904 kubeadm.go:310] W0924 19:51:29.206815    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124730   69904 kubeadm.go:310] W0924 19:51:29.207506    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:51:37.124872   69904 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:51:37.124908   69904 cni.go:84] Creating CNI manager for ""
	I0924 19:51:37.124921   69904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:51:37.126897   69904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:51:37.128457   69904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:51:37.138516   69904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 19:51:37.154747   69904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:51:37.154812   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.154860   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-093771 minikube.k8s.io/updated_at=2024_09_24T19_51_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=default-k8s-diff-port-093771 minikube.k8s.io/primary=true
	I0924 19:51:37.178892   69904 ops.go:34] apiserver oom_adj: -16
	I0924 19:51:37.364019   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:37.864960   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.364223   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:38.864189   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.365144   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:39.864326   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.364143   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:40.864333   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.364236   69904 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:51:41.461496   69904 kubeadm.go:1113] duration metric: took 4.30674912s to wait for elevateKubeSystemPrivileges
	I0924 19:51:41.461536   69904 kubeadm.go:394] duration metric: took 4m59.728895745s to StartCluster
	I0924 19:51:41.461557   69904 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.461654   69904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:51:41.464153   69904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:51:41.464416   69904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:51:41.464620   69904 config.go:182] Loaded profile config "default-k8s-diff-port-093771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:51:41.464553   69904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:51:41.464699   69904 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464718   69904 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-093771"
	I0924 19:51:41.464722   69904 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-093771"
	I0924 19:51:41.464753   69904 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-093771"
	I0924 19:51:41.464774   69904 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.464786   69904 addons.go:243] addon metrics-server should already be in state true
	I0924 19:51:41.464824   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	W0924 19:51:41.464729   69904 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:51:41.464894   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.465192   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465211   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.465242   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465280   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.465229   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.466016   69904 out.go:177] * Verifying Kubernetes components...
	I0924 19:51:41.467370   69904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:51:41.480937   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0924 19:51:41.481105   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0924 19:51:41.481377   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.481596   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.482008   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482032   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482119   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.482139   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.482420   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482453   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.482636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.483038   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.483079   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.484535   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0924 19:51:41.486427   69904 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-093771"
	W0924 19:51:41.486572   69904 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:51:41.486612   69904 host.go:66] Checking if "default-k8s-diff-port-093771" exists ...
	I0924 19:51:41.486941   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.487097   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.487145   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.487517   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.487536   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.487866   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.488447   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.488493   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.502934   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0924 19:51:41.503244   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0924 19:51:41.503446   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503810   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.503904   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.503920   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504266   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.504281   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.504327   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.504742   69904 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:51:41.504768   69904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:51:41.505104   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.505295   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.508446   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0924 19:51:41.508449   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.508839   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.509365   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.509388   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.509739   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.509898   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.510390   69904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:51:41.511622   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.511801   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:51:41.511818   69904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:51:41.511838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.513430   69904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:51:41.514819   69904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.514853   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:51:41.514871   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.515131   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.515838   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.515903   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.515983   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.516096   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.516270   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.516423   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.518636   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519167   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.519192   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.519477   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.519709   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.519885   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.520037   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.522168   69904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I0924 19:51:41.522719   69904 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:51:41.523336   69904 main.go:141] libmachine: Using API Version  1
	I0924 19:51:41.523360   69904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:51:41.523663   69904 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:51:41.523857   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetState
	I0924 19:51:41.525469   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .DriverName
	I0924 19:51:41.525702   69904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.525718   69904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:51:41.525738   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHHostname
	I0924 19:51:41.528613   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529122   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:4a:f5", ip: ""} in network mk-default-k8s-diff-port-093771: {Iface:virbr4 ExpiryTime:2024-09-24 20:46:27 +0000 UTC Type:0 Mac:52:54:00:21:4a:f5 Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:default-k8s-diff-port-093771 Clientid:01:52:54:00:21:4a:f5}
	I0924 19:51:41.529142   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | domain default-k8s-diff-port-093771 has defined IP address 192.168.50.116 and MAC address 52:54:00:21:4a:f5 in network mk-default-k8s-diff-port-093771
	I0924 19:51:41.529384   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHPort
	I0924 19:51:41.529572   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHKeyPath
	I0924 19:51:41.529764   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .GetSSHUsername
	I0924 19:51:41.529913   69904 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/default-k8s-diff-port-093771/id_rsa Username:docker}
	I0924 19:51:41.666584   69904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:51:41.685485   69904 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701712   69904 node_ready.go:49] node "default-k8s-diff-port-093771" has status "Ready":"True"
	I0924 19:51:41.701735   69904 node_ready.go:38] duration metric: took 16.218729ms for node "default-k8s-diff-port-093771" to be "Ready" ...
	I0924 19:51:41.701745   69904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:41.732271   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:41.759846   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:51:41.850208   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:51:41.854353   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:51:41.854372   69904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:51:41.884080   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:51:41.884109   69904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:51:41.924130   69904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:41.924161   69904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:51:41.956667   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.956699   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957030   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957044   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957051   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.957058   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.957319   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.957378   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.957353   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:41.964614   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:41.964632   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:41.964934   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:41.964953   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:41.988158   69904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:51:42.871520   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021277105s)
	I0924 19:51:42.871575   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871586   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.871871   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.871892   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.871905   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:42.871918   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:42.872184   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:42.872237   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:42.872259   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.106973   69904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118760493s)
	I0924 19:51:43.107032   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107047   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107342   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) DBG | Closing plugin on server side
	I0924 19:51:43.107375   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107389   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107403   69904 main.go:141] libmachine: Making call to close driver server
	I0924 19:51:43.107414   69904 main.go:141] libmachine: (default-k8s-diff-port-093771) Calling .Close
	I0924 19:51:43.107682   69904 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:51:43.107697   69904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:51:43.107715   69904 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-093771"
	I0924 19:51:43.109818   69904 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:51:43.111542   69904 addons.go:510] duration metric: took 1.646997004s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0924 19:51:43.738989   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:45.738584   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:45.738610   69904 pod_ready.go:82] duration metric: took 4.006305736s for pod "coredns-7c65d6cfc9-87t62" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:45.738622   69904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:47.746429   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:50.864744   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:51:50.865098   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:50.865318   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
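	The [kubelet-check] timeout in process 70152 is the failure signature for the old-k8s-version runs in this report: kubeadm keeps retrying the kubelet's local healthz endpoint and it never answers. When reproducing by hand, the usual first checks on the node are the kubelet unit and its journal (generic troubleshooting commands, not taken from this log):

	    # Inspect kubelet state inside the VM (assumes shell access via 'minikube ssh' to the profile).
	    sudo systemctl status kubelet --no-pager     # is the unit active?
	    sudo journalctl -u kubelet --no-pager -n 50  # last kubelet log lines
	    curl -sS http://localhost:10248/healthz      # the same probe kubeadm retries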
	I0924 19:51:50.245581   69904 pod_ready.go:103] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"False"
	I0924 19:51:51.745840   69904 pod_ready.go:93] pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.745870   69904 pod_ready.go:82] duration metric: took 6.007240203s for pod "coredns-7c65d6cfc9-nzssp" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.745888   69904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754529   69904 pod_ready.go:93] pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.754556   69904 pod_ready.go:82] duration metric: took 8.660403ms for pod "etcd-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.754569   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764561   69904 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.764589   69904 pod_ready.go:82] duration metric: took 10.010012ms for pod "kube-apiserver-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.764603   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771177   69904 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.771205   69904 pod_ready.go:82] duration metric: took 6.593267ms for pod "kube-controller-manager-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.771218   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775929   69904 pod_ready.go:93] pod "kube-proxy-5rw7b" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:51.775952   69904 pod_ready.go:82] duration metric: took 4.726185ms for pod "kube-proxy-5rw7b" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:51.775964   69904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143343   69904 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace has status "Ready":"True"
	I0924 19:51:52.143367   69904 pod_ready.go:82] duration metric: took 367.395759ms for pod "kube-scheduler-default-k8s-diff-port-093771" in "kube-system" namespace to be "Ready" ...
	I0924 19:51:52.143375   69904 pod_ready.go:39] duration metric: took 10.441621626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:51:52.143388   69904 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:51:52.143433   69904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:51:52.157316   69904 api_server.go:72] duration metric: took 10.69286406s to wait for apiserver process to appear ...
	I0924 19:51:52.157344   69904 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:51:52.157363   69904 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8444/healthz ...
	I0924 19:51:52.162550   69904 api_server.go:279] https://192.168.50.116:8444/healthz returned 200:
	ok
	I0924 19:51:52.163431   69904 api_server.go:141] control plane version: v1.31.1
	I0924 19:51:52.163453   69904 api_server.go:131] duration metric: took 6.102223ms to wait for apiserver health ...
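	The healthz probe above hits the API server directly on the node IP and the non-default port 8444. The same smoke test can be run by hand against this profile; /healthz is normally readable without credentials under the default RBAC, so an insecure curl is enough (sketch, assuming the IP/port from this log are still assigned):

	    # Same health probe the test performs (illustrative).
	    curl -k https://192.168.50.116:8444/healthz   # expect: ok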
	I0924 19:51:52.163465   69904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:51:52.346998   69904 system_pods.go:59] 9 kube-system pods found
	I0924 19:51:52.347026   69904 system_pods.go:61] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.347031   69904 system_pods.go:61] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.347036   69904 system_pods.go:61] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.347039   69904 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.347043   69904 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.347046   69904 system_pods.go:61] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.347049   69904 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.347055   69904 system_pods.go:61] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.347059   69904 system_pods.go:61] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.347067   69904 system_pods.go:74] duration metric: took 183.595946ms to wait for pod list to return data ...
	I0924 19:51:52.347074   69904 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:51:52.542476   69904 default_sa.go:45] found service account: "default"
	I0924 19:51:52.542504   69904 default_sa.go:55] duration metric: took 195.421838ms for default service account to be created ...
	I0924 19:51:52.542514   69904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:51:52.747902   69904 system_pods.go:86] 9 kube-system pods found
	I0924 19:51:52.747936   69904 system_pods.go:89] "coredns-7c65d6cfc9-87t62" [b4be73eb-defb-4cc1-84f7-d34dccab4a2c] Running
	I0924 19:51:52.747943   69904 system_pods.go:89] "coredns-7c65d6cfc9-nzssp" [ecf276cd-9aa0-4a0b-81b6-da38271d10ed] Running
	I0924 19:51:52.747950   69904 system_pods.go:89] "etcd-default-k8s-diff-port-093771" [809f2c90-7cfc-4c77-a078-7883a7c6f2ac] Running
	I0924 19:51:52.747955   69904 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-093771" [2d297125-52bd-4c17-ab57-89911bb046e7] Running
	I0924 19:51:52.747961   69904 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-093771" [9e3c3d16-5e5d-4ebf-9ade-24cb40b9e836] Running
	I0924 19:51:52.747966   69904 system_pods.go:89] "kube-proxy-5rw7b" [f2916b6c-1a6f-4766-8543-0d846f559710] Running
	I0924 19:51:52.747971   69904 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-093771" [d1db09ad-d2e9-4453-b354-379bbb4081bf] Running
	I0924 19:51:52.747981   69904 system_pods.go:89] "metrics-server-6867b74b74-gnlkd" [a3b6c4f7-47e1-48a3-adff-1690db5cea3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:51:52.747988   69904 system_pods.go:89] "storage-provisioner" [591605b2-de7e-4dc1-903b-f8102ccc3770] Running
	I0924 19:51:52.748002   69904 system_pods.go:126] duration metric: took 205.481542ms to wait for k8s-apps to be running ...
	I0924 19:51:52.748010   69904 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:51:52.748069   69904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:51:52.763092   69904 system_svc.go:56] duration metric: took 15.071727ms WaitForService to wait for kubelet
	I0924 19:51:52.763121   69904 kubeadm.go:582] duration metric: took 11.298674484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:51:52.763141   69904 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:51:52.942890   69904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:51:52.942915   69904 node_conditions.go:123] node cpu capacity is 2
	I0924 19:51:52.942925   69904 node_conditions.go:105] duration metric: took 179.779826ms to run NodePressure ...
	I0924 19:51:52.942935   69904 start.go:241] waiting for startup goroutines ...
	I0924 19:51:52.942941   69904 start.go:246] waiting for cluster config update ...
	I0924 19:51:52.942951   69904 start.go:255] writing updated cluster config ...
	I0924 19:51:52.943201   69904 ssh_runner.go:195] Run: rm -f paused
	I0924 19:51:52.992952   69904 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:51:52.995076   69904 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-093771" cluster and "default" namespace by default
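	At this point the default-k8s-diff-port-093771 profile is started with default-storageclass, storage-provisioner and metrics-server enabled, but the metrics-server pod is still reported Pending at 19:51:52 above. A quick way to inspect that pod by hand, using the context name from the Done message (the pod-name suffix will differ per run, this one is taken from the log):

	    # Inspect the metrics-server pod this test group keeps polling (illustrative).
	    kubectl --context default-k8s-diff-port-093771 -n kube-system get pods | grep metrics-server
	    kubectl --context default-k8s-diff-port-093771 -n kube-system describe pod metrics-server-6867b74b74-gnlkd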
	I0924 19:51:55.865870   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:51:55.866074   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:52:02.110619   69408 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.160686078s)
	I0924 19:52:02.110702   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:02.124706   69408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 19:52:02.133983   69408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:52:02.142956   69408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:52:02.142980   69408 kubeadm.go:157] found existing configuration files:
	
	I0924 19:52:02.143027   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:52:02.151037   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:52:02.151101   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:52:02.160469   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:52:02.168827   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:52:02.168889   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:52:02.177644   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.186999   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:52:02.187064   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:52:02.195935   69408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:52:02.204688   69408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:52:02.204763   69408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:52:02.213564   69408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:52:02.259426   69408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 19:52:02.259587   69408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:52:02.355605   69408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:52:02.355774   69408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:52:02.355928   69408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 19:52:02.363355   69408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:52:02.365307   69408 out.go:235]   - Generating certificates and keys ...
	I0924 19:52:02.365423   69408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:52:02.365526   69408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:52:02.365688   69408 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:52:02.365773   69408 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:52:02.365879   69408 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:52:02.365955   69408 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:52:02.366061   69408 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:52:02.366149   69408 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:52:02.366257   69408 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:52:02.366362   69408 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:52:02.366417   69408 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:52:02.366502   69408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:52:02.551857   69408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:52:02.836819   69408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 19:52:03.096479   69408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:52:03.209489   69408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:52:03.274701   69408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:52:03.275214   69408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:52:03.277917   69408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:52:03.279804   69408 out.go:235]   - Booting up control plane ...
	I0924 19:52:03.279909   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:52:03.280022   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:52:03.280130   69408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:52:03.297451   69408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:52:03.304789   69408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:52:03.304840   69408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:52:03.423280   69408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 19:52:03.423394   69408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 19:52:03.925128   69408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.985266ms
	I0924 19:52:03.925262   69408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 19:52:05.866171   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:05.866441   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
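The probe that kubeadm is retrying in the interleaved lines above (process 70152) is just an HTTP GET against the kubelet's local healthz port. It can be reproduced by hand on the affected VM (for example over minikube ssh); a minimal sketch using the same commands the kubeadm output refers to:

    # Reproduce the health probe kubeadm performs against the kubelet (port 10248 is local-only).
    curl -sSL http://localhost:10248/healthz
    # If the connection is refused, confirm whether the kubelet service is actually running.
    sudo systemctl is-active kubelet
    sudo systemctl status kubelet --no-pager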
	I0924 19:52:08.429070   69408 kubeadm.go:310] [api-check] The API server is healthy after 4.502084393s
	I0924 19:52:08.439108   69408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 19:52:08.455261   69408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 19:52:08.479883   69408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 19:52:08.480145   69408 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-311319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 19:52:08.490294   69408 kubeadm.go:310] [bootstrap-token] Using token: ugx0qk.6i7lm67tfu0foozy
	I0924 19:52:08.491600   69408 out.go:235]   - Configuring RBAC rules ...
	I0924 19:52:08.491741   69408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 19:52:08.496142   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 19:52:08.502704   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 19:52:08.508752   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 19:52:08.512088   69408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 19:52:08.515855   69408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 19:52:08.837286   69408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 19:52:09.278937   69408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 19:52:09.835442   69408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 19:52:09.836889   69408 kubeadm.go:310] 
	I0924 19:52:09.836953   69408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 19:52:09.836967   69408 kubeadm.go:310] 
	I0924 19:52:09.837040   69408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 19:52:09.837048   69408 kubeadm.go:310] 
	I0924 19:52:09.837068   69408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 19:52:09.837117   69408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 19:52:09.837167   69408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 19:52:09.837174   69408 kubeadm.go:310] 
	I0924 19:52:09.837238   69408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 19:52:09.837246   69408 kubeadm.go:310] 
	I0924 19:52:09.837297   69408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 19:52:09.837307   69408 kubeadm.go:310] 
	I0924 19:52:09.837371   69408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 19:52:09.837490   69408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 19:52:09.837611   69408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 19:52:09.837630   69408 kubeadm.go:310] 
	I0924 19:52:09.837706   69408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 19:52:09.837774   69408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 19:52:09.837780   69408 kubeadm.go:310] 
	I0924 19:52:09.837851   69408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.837951   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a \
	I0924 19:52:09.837979   69408 kubeadm.go:310] 	--control-plane 
	I0924 19:52:09.837992   69408 kubeadm.go:310] 
	I0924 19:52:09.838087   69408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 19:52:09.838100   69408 kubeadm.go:310] 
	I0924 19:52:09.838204   69408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugx0qk.6i7lm67tfu0foozy \
	I0924 19:52:09.838325   69408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce711a1ffa9d1a63510e9a33762d877f3d61b4dda4e766a607ec2e9c946ed79a 
	I0924 19:52:09.839629   69408 kubeadm.go:310] W0924 19:52:02.243473    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.839919   69408 kubeadm.go:310] W0924 19:52:02.244730    2529 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 19:52:09.840040   69408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:52:09.840056   69408 cni.go:84] Creating CNI manager for ""
	I0924 19:52:09.840067   69408 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 19:52:09.842039   69408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 19:52:09.843562   69408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 19:52:09.855620   69408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
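The 496-byte file written above configures the bridge CNI plugin chosen for the kvm2 + crio combination. The log does not reproduce its contents; a minimal bridge-plus-portmap conflist of roughly the shape such a file takes (illustrative sketch only, subnet and names are placeholders, not the literal file minikube generated) could be written like this:

    # Illustrative bridge CNI conflist; values are assumptions, not taken from the log.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF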
	I0924 19:52:09.873291   69408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 19:52:09.873381   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:09.873401   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-311319 minikube.k8s.io/updated_at=2024_09_24T19_52_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=embed-certs-311319 minikube.k8s.io/primary=true
	I0924 19:52:09.898351   69408 ops.go:34] apiserver oom_adj: -16
	I0924 19:52:10.043641   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:10.544445   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.043725   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:11.543862   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.043769   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:12.543723   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.044577   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:13.544545   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.043885   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.544454   69408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 19:52:14.663140   69408 kubeadm.go:1113] duration metric: took 4.789827964s to wait for elevateKubeSystemPrivileges
	I0924 19:52:14.663181   69408 kubeadm.go:394] duration metric: took 4m55.527467072s to StartCluster
	I0924 19:52:14.663202   69408 settings.go:142] acquiring lock: {Name:mkf5ca3281c4b6d52c64cf7a56dc9459ae48933b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.663295   69408 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:52:14.665852   69408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3751/kubeconfig: {Name:mkb5afdcec617ea8064f238ecbd1a5a25d08422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 19:52:14.666123   69408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 19:52:14.666181   69408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 19:52:14.666281   69408 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-311319"
	I0924 19:52:14.666302   69408 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-311319"
	I0924 19:52:14.666298   69408 addons.go:69] Setting default-storageclass=true in profile "embed-certs-311319"
	W0924 19:52:14.666315   69408 addons.go:243] addon storage-provisioner should already be in state true
	I0924 19:52:14.666324   69408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-311319"
	I0924 19:52:14.666347   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666357   69408 config.go:182] Loaded profile config "embed-certs-311319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:52:14.666407   69408 addons.go:69] Setting metrics-server=true in profile "embed-certs-311319"
	I0924 19:52:14.666424   69408 addons.go:234] Setting addon metrics-server=true in "embed-certs-311319"
	W0924 19:52:14.666432   69408 addons.go:243] addon metrics-server should already be in state true
	I0924 19:52:14.666462   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.666762   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666766   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666803   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666863   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.666899   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.666937   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.667748   69408 out.go:177] * Verifying Kubernetes components...
	I0924 19:52:14.669166   69408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 19:52:14.684612   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0924 19:52:14.684876   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0924 19:52:14.685146   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685266   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.685645   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685662   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.685689   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0924 19:52:14.685786   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.685806   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686027   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686034   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.686125   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.686517   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686559   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.686617   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.686638   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.686666   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.687118   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.687348   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.690029   69408 addons.go:234] Setting addon default-storageclass=true in "embed-certs-311319"
	W0924 19:52:14.690047   69408 addons.go:243] addon default-storageclass should already be in state true
	I0924 19:52:14.690067   69408 host.go:66] Checking if "embed-certs-311319" exists ...
	I0924 19:52:14.690357   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.690389   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.705119   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0924 19:52:14.705473   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0924 19:52:14.705613   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.705983   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.706260   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706283   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706433   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.706458   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.706673   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706793   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.706937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.706989   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.708118   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0924 19:52:14.708552   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.708751   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709269   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.709288   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.709312   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.709894   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.710364   69408 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19700-3751/.minikube/bin/docker-machine-driver-kvm2
	I0924 19:52:14.710405   69408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:52:14.710737   69408 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 19:52:14.710846   69408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 19:52:14.711925   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 19:52:14.711937   69408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 19:52:14.711951   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.712493   69408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:14.712506   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 19:52:14.712521   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.716365   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716390   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.716402   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716511   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.716639   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.716738   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.716763   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.716820   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.717468   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.717490   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.717691   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.717856   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.718038   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.718356   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.729081   69408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0924 19:52:14.729516   69408 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:52:14.730022   69408 main.go:141] libmachine: Using API Version  1
	I0924 19:52:14.730040   69408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:52:14.730363   69408 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:52:14.730541   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetState
	I0924 19:52:14.732272   69408 main.go:141] libmachine: (embed-certs-311319) Calling .DriverName
	I0924 19:52:14.732526   69408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:14.732545   69408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 19:52:14.732564   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHHostname
	I0924 19:52:14.735618   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736196   69408 main.go:141] libmachine: (embed-certs-311319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:97:73", ip: ""} in network mk-embed-certs-311319: {Iface:virbr3 ExpiryTime:2024-09-24 20:47:04 +0000 UTC Type:0 Mac:52:54:00:2d:97:73 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:embed-certs-311319 Clientid:01:52:54:00:2d:97:73}
	I0924 19:52:14.736220   69408 main.go:141] libmachine: (embed-certs-311319) DBG | domain embed-certs-311319 has defined IP address 192.168.61.21 and MAC address 52:54:00:2d:97:73 in network mk-embed-certs-311319
	I0924 19:52:14.736269   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHPort
	I0924 19:52:14.736499   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHKeyPath
	I0924 19:52:14.736675   69408 main.go:141] libmachine: (embed-certs-311319) Calling .GetSSHUsername
	I0924 19:52:14.736823   69408 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/embed-certs-311319/id_rsa Username:docker}
	I0924 19:52:14.869932   69408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 19:52:14.906644   69408 node_ready.go:35] waiting up to 6m0s for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914856   69408 node_ready.go:49] node "embed-certs-311319" has status "Ready":"True"
	I0924 19:52:14.914884   69408 node_ready.go:38] duration metric: took 8.205314ms for node "embed-certs-311319" to be "Ready" ...
	I0924 19:52:14.914893   69408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:14.919969   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:15.014078   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 19:52:15.014101   69408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 19:52:15.052737   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 19:52:15.064467   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 19:52:15.065858   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 19:52:15.065877   69408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 19:52:15.137882   69408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.137902   69408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 19:52:15.222147   69408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 19:52:15.331245   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331279   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331622   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331647   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331656   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.331664   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.331624   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.331894   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.331910   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.331898   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:15.339921   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:15.339937   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:15.340159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:15.340203   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:15.340235   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.048748   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.048769   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049094   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049133   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049144   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.049152   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.049159   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.049489   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.049524   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.049544   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149500   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149522   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.149817   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.149877   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.149903   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.149917   69408 main.go:141] libmachine: Making call to close driver server
	I0924 19:52:16.149926   69408 main.go:141] libmachine: (embed-certs-311319) Calling .Close
	I0924 19:52:16.150145   69408 main.go:141] libmachine: (embed-certs-311319) DBG | Closing plugin on server side
	I0924 19:52:16.150159   69408 main.go:141] libmachine: Successfully made call to close driver server
	I0924 19:52:16.150182   69408 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 19:52:16.150191   69408 addons.go:475] Verifying addon metrics-server=true in "embed-certs-311319"
	I0924 19:52:16.151648   69408 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0924 19:52:16.153171   69408 addons.go:510] duration metric: took 1.486993032s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
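With the three addons reported as enabled, the objects they created can be checked with plain kubectl. A rough manual verification, using the object names visible in this log and assuming the usual convention that the kubectl context is named after the profile:

    # Deployment behind the metrics-server-6867b74b74-* pod seen later in the log.
    kubectl --context embed-certs-311319 -n kube-system get deploy metrics-server
    # storage-provisioner runs as a single pod in kube-system.
    kubectl --context embed-certs-311319 -n kube-system get pod storage-provisioner
    # default-storageclass should leave a StorageClass marked as default.
    kubectl --context embed-certs-311319 get storageclass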
	I0924 19:52:16.925437   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:18.926343   69408 pod_ready.go:103] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"False"
	I0924 19:52:20.928047   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.928068   69408 pod_ready.go:82] duration metric: took 6.008077725s for pod "coredns-7c65d6cfc9-jsvdk" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.928076   69408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933100   69408 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.933119   69408 pod_ready.go:82] duration metric: took 5.035858ms for pod "coredns-7c65d6cfc9-qgfvt" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.933127   69408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938200   69408 pod_ready.go:93] pod "etcd-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.938215   69408 pod_ready.go:82] duration metric: took 5.082837ms for pod "etcd-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.938223   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942124   69408 pod_ready.go:93] pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.942143   69408 pod_ready.go:82] duration metric: took 3.912415ms for pod "kube-apiserver-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.942154   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946306   69408 pod_ready.go:93] pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:20.946323   69408 pod_ready.go:82] duration metric: took 4.162782ms for pod "kube-controller-manager-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:20.946330   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323768   69408 pod_ready.go:93] pod "kube-proxy-h42s7" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.323794   69408 pod_ready.go:82] duration metric: took 377.456852ms for pod "kube-proxy-h42s7" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.323806   69408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723714   69408 pod_ready.go:93] pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace has status "Ready":"True"
	I0924 19:52:21.723742   69408 pod_ready.go:82] duration metric: took 399.928048ms for pod "kube-scheduler-embed-certs-311319" in "kube-system" namespace to be "Ready" ...
	I0924 19:52:21.723752   69408 pod_ready.go:39] duration metric: took 6.808848583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 19:52:21.723769   69408 api_server.go:52] waiting for apiserver process to appear ...
	I0924 19:52:21.723835   69408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:52:21.738273   69408 api_server.go:72] duration metric: took 7.072120167s to wait for apiserver process to appear ...
	I0924 19:52:21.738301   69408 api_server.go:88] waiting for apiserver healthz status ...
	I0924 19:52:21.738353   69408 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8443/healthz ...
	I0924 19:52:21.743391   69408 api_server.go:279] https://192.168.61.21:8443/healthz returned 200:
	ok
	I0924 19:52:21.744346   69408 api_server.go:141] control plane version: v1.31.1
	I0924 19:52:21.744361   69408 api_server.go:131] duration metric: took 6.053884ms to wait for apiserver health ...
	I0924 19:52:21.744368   69408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 19:52:21.926453   69408 system_pods.go:59] 9 kube-system pods found
	I0924 19:52:21.926485   69408 system_pods.go:61] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:21.926493   69408 system_pods.go:61] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:21.926499   69408 system_pods.go:61] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:21.926505   69408 system_pods.go:61] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:21.926510   69408 system_pods.go:61] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:21.926517   69408 system_pods.go:61] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:21.926522   69408 system_pods.go:61] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:21.926531   69408 system_pods.go:61] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:21.926540   69408 system_pods.go:61] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:21.926551   69408 system_pods.go:74] duration metric: took 182.176397ms to wait for pod list to return data ...
	I0924 19:52:21.926562   69408 default_sa.go:34] waiting for default service account to be created ...
	I0924 19:52:22.123871   69408 default_sa.go:45] found service account: "default"
	I0924 19:52:22.123896   69408 default_sa.go:55] duration metric: took 197.328478ms for default service account to be created ...
	I0924 19:52:22.123911   69408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 19:52:22.327585   69408 system_pods.go:86] 9 kube-system pods found
	I0924 19:52:22.327616   69408 system_pods.go:89] "coredns-7c65d6cfc9-jsvdk" [da741136-c1ce-436f-9df0-e447b067265f] Running
	I0924 19:52:22.327625   69408 system_pods.go:89] "coredns-7c65d6cfc9-qgfvt" [7e3f7256-9bcb-4be8-a3a8-fb57ee6c0c74] Running
	I0924 19:52:22.327630   69408 system_pods.go:89] "etcd-embed-certs-311319" [543c64c6-453b-4d42-b6a8-5b25577b3b8a] Running
	I0924 19:52:22.327636   69408 system_pods.go:89] "kube-apiserver-embed-certs-311319" [c1cd4c65-07a6-4d53-8f1d-438a8efdcdfa] Running
	I0924 19:52:22.327641   69408 system_pods.go:89] "kube-controller-manager-embed-certs-311319" [eece1531-5f24-4853-9e91-ca29558f3b9d] Running
	I0924 19:52:22.327647   69408 system_pods.go:89] "kube-proxy-h42s7" [76930a49-6a8a-4d02-84b8-8e26f3196ac3] Running
	I0924 19:52:22.327652   69408 system_pods.go:89] "kube-scheduler-embed-certs-311319" [22d20361-552d-4443-bec2-e406919d2966] Running
	I0924 19:52:22.327662   69408 system_pods.go:89] "metrics-server-6867b74b74-xnwm4" [dc64f26b-e4a6-4692-83d5-e6c794c1b130] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 19:52:22.327671   69408 system_pods.go:89] "storage-provisioner" [766bdfe2-684a-47de-94fd-088795b60e2b] Running
	I0924 19:52:22.327680   69408 system_pods.go:126] duration metric: took 203.762675ms to wait for k8s-apps to be running ...
	I0924 19:52:22.327687   69408 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 19:52:22.327741   69408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:52:22.340873   69408 system_svc.go:56] duration metric: took 13.177605ms WaitForService to wait for kubelet
	I0924 19:52:22.340903   69408 kubeadm.go:582] duration metric: took 7.674755249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 19:52:22.340925   69408 node_conditions.go:102] verifying NodePressure condition ...
	I0924 19:52:22.524647   69408 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 19:52:22.524670   69408 node_conditions.go:123] node cpu capacity is 2
	I0924 19:52:22.524679   69408 node_conditions.go:105] duration metric: took 183.74973ms to run NodePressure ...
	I0924 19:52:22.524688   69408 start.go:241] waiting for startup goroutines ...
	I0924 19:52:22.524695   69408 start.go:246] waiting for cluster config update ...
	I0924 19:52:22.524705   69408 start.go:255] writing updated cluster config ...
	I0924 19:52:22.524994   69408 ssh_runner.go:195] Run: rm -f paused
	I0924 19:52:22.571765   69408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 19:52:22.574724   69408 out.go:177] * Done! kubectl is now configured to use "embed-certs-311319" cluster and "default" namespace by default
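After the Done! line the kubeconfig at /home/jenkins/minikube-integration/19700-3751/kubeconfig points at the new cluster, so it can be exercised directly from the host. An illustrative check (context name assumed to match the profile, per the message above):

    export KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
    kubectl config current-context        # expected: embed-certs-311319
    kubectl get nodes -o wide
    kubectl -n kube-system get pods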
	I0924 19:52:25.866986   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:52:25.867227   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868563   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:05.868798   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:05.868811   70152 kubeadm.go:310] 
	I0924 19:53:05.868866   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:53:05.868927   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:53:05.868936   70152 kubeadm.go:310] 
	I0924 19:53:05.868989   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:53:05.869037   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:53:05.869201   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:53:05.869212   70152 kubeadm.go:310] 
	I0924 19:53:05.869332   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:53:05.869380   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:53:05.869433   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:53:05.869442   70152 kubeadm.go:310] 
	I0924 19:53:05.869555   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:53:05.869664   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:53:05.869674   70152 kubeadm.go:310] 
	I0924 19:53:05.869792   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:53:05.869900   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:53:05.870003   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:53:05.870132   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:53:05.870172   70152 kubeadm.go:310] 
	I0924 19:53:05.870425   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:53:05.870536   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:53:05.870658   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 19:53:05.870869   70152 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
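When wait-control-plane times out like this, the next diagnostic step is the one the error text itself spells out: look at the kubelet service and at whatever control-plane containers cri-o did or did not start. On the node that amounts to roughly:

    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # List any kube-* containers cri-o started (the same crictl invocation suggested above).
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then inspect the logs of a failing container by its ID.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID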
	
	I0924 19:53:05.870918   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 19:53:06.301673   70152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:53:06.316103   70152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 19:53:06.326362   70152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 19:53:06.326396   70152 kubeadm.go:157] found existing configuration files:
	
	I0924 19:53:06.326454   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 19:53:06.334687   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 19:53:06.334744   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 19:53:06.344175   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 19:53:06.352663   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 19:53:06.352725   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 19:53:06.361955   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.370584   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 19:53:06.370625   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 19:53:06.379590   70152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 19:53:06.388768   70152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 19:53:06.388825   70152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 19:53:06.397242   70152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 19:53:06.469463   70152 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 19:53:06.469547   70152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 19:53:06.606743   70152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 19:53:06.606900   70152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 19:53:06.607021   70152 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 19:53:06.778104   70152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 19:53:06.780036   70152 out.go:235]   - Generating certificates and keys ...
	I0924 19:53:06.780148   70152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 19:53:06.780241   70152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 19:53:06.780359   70152 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 19:53:06.780451   70152 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 19:53:06.780578   70152 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 19:53:06.780654   70152 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 19:53:06.780753   70152 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 19:53:06.780852   70152 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 19:53:06.780972   70152 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 19:53:06.781119   70152 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 19:53:06.781178   70152 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 19:53:06.781254   70152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 19:53:06.836315   70152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 19:53:06.938657   70152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 19:53:07.273070   70152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 19:53:07.347309   70152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 19:53:07.369112   70152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 19:53:07.369777   70152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 19:53:07.369866   70152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 19:53:07.504122   70152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 19:53:07.506006   70152 out.go:235]   - Booting up control plane ...
	I0924 19:53:07.506117   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 19:53:07.509132   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 19:53:07.509998   70152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 19:53:07.510662   70152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 19:53:07.513856   70152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 19:53:47.515377   70152 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 19:53:47.515684   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:47.515976   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:53:52.516646   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:53:52.516842   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:02.517539   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:02.517808   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:54:22.518364   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:54:22.518605   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517378   70152 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 19:55:02.517642   70152 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 19:55:02.517672   70152 kubeadm.go:310] 
	I0924 19:55:02.517732   70152 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 19:55:02.517791   70152 kubeadm.go:310] 		timed out waiting for the condition
	I0924 19:55:02.517802   70152 kubeadm.go:310] 
	I0924 19:55:02.517880   70152 kubeadm.go:310] 	This error is likely caused by:
	I0924 19:55:02.517943   70152 kubeadm.go:310] 		- The kubelet is not running
	I0924 19:55:02.518090   70152 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 19:55:02.518102   70152 kubeadm.go:310] 
	I0924 19:55:02.518239   70152 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 19:55:02.518289   70152 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 19:55:02.518347   70152 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 19:55:02.518358   70152 kubeadm.go:310] 
	I0924 19:55:02.518488   70152 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 19:55:02.518565   70152 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 19:55:02.518572   70152 kubeadm.go:310] 
	I0924 19:55:02.518685   70152 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 19:55:02.518768   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 19:55:02.518891   70152 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 19:55:02.518991   70152 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 19:55:02.519010   70152 kubeadm.go:310] 
	I0924 19:55:02.519626   70152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 19:55:02.519745   70152 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 19:55:02.519839   70152 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 19:55:02.519914   70152 kubeadm.go:394] duration metric: took 8m1.249852968s to StartCluster
	I0924 19:55:02.519952   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 19:55:02.520008   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 19:55:02.552844   70152 cri.go:89] found id: ""
	I0924 19:55:02.552880   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.552891   70152 logs.go:278] No container was found matching "kube-apiserver"
	I0924 19:55:02.552899   70152 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 19:55:02.552956   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 19:55:02.582811   70152 cri.go:89] found id: ""
	I0924 19:55:02.582858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.582869   70152 logs.go:278] No container was found matching "etcd"
	I0924 19:55:02.582876   70152 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 19:55:02.582929   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 19:55:02.614815   70152 cri.go:89] found id: ""
	I0924 19:55:02.614858   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.614868   70152 logs.go:278] No container was found matching "coredns"
	I0924 19:55:02.614874   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 19:55:02.614920   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 19:55:02.644953   70152 cri.go:89] found id: ""
	I0924 19:55:02.644982   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.644991   70152 logs.go:278] No container was found matching "kube-scheduler"
	I0924 19:55:02.644998   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 19:55:02.645053   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 19:55:02.680419   70152 cri.go:89] found id: ""
	I0924 19:55:02.680448   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.680458   70152 logs.go:278] No container was found matching "kube-proxy"
	I0924 19:55:02.680466   70152 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 19:55:02.680525   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 19:55:02.713021   70152 cri.go:89] found id: ""
	I0924 19:55:02.713043   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.713051   70152 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 19:55:02.713057   70152 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 19:55:02.713118   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 19:55:02.748326   70152 cri.go:89] found id: ""
	I0924 19:55:02.748350   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.748358   70152 logs.go:278] No container was found matching "kindnet"
	I0924 19:55:02.748364   70152 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 19:55:02.748416   70152 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 19:55:02.780489   70152 cri.go:89] found id: ""
	I0924 19:55:02.780523   70152 logs.go:276] 0 containers: []
	W0924 19:55:02.780546   70152 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 19:55:02.780558   70152 logs.go:123] Gathering logs for kubelet ...
	I0924 19:55:02.780572   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 19:55:02.830514   70152 logs.go:123] Gathering logs for dmesg ...
	I0924 19:55:02.830550   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 19:55:02.845321   70152 logs.go:123] Gathering logs for describe nodes ...
	I0924 19:55:02.845349   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 19:55:02.909352   70152 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 19:55:02.909383   70152 logs.go:123] Gathering logs for CRI-O ...
	I0924 19:55:02.909399   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 19:55:03.033937   70152 logs.go:123] Gathering logs for container status ...
	I0924 19:55:03.033972   70152 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 19:55:03.070531   70152 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 19:55:03.070611   70152 out.go:270] * 
	W0924 19:55:03.070682   70152 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.070701   70152 out.go:270] * 
	W0924 19:55:03.071559   70152 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 19:55:03.074921   70152 out.go:201] 
	W0924 19:55:03.076106   70152 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 19:55:03.076150   70152 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 19:55:03.076180   70152 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 19:55:03.077787   70152 out.go:201] 
	
	
	==> CRI-O <==
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.878429815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208395878406133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b32815e-1db2-4d08-bde1-e51ac783dd0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.878985556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5db37d3-5fa8-44ec-900b-1e5473650846 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.879043480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5db37d3-5fa8-44ec-900b-1e5473650846 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.879116234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5db37d3-5fa8-44ec-900b-1e5473650846 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.909854161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8ec4a82-1bc3-432a-badc-402cc438d0b2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.909959118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8ec4a82-1bc3-432a-badc-402cc438d0b2 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.911264457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=285e6a76-86f3-4447-ab2f-1eae3cb29185 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.911629721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208395911611251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=285e6a76-86f3-4447-ab2f-1eae3cb29185 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.912030849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a93feff-5f65-4c46-9bd5-bcae30cf6b92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.912076435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a93feff-5f65-4c46-9bd5-bcae30cf6b92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.912146221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a93feff-5f65-4c46-9bd5-bcae30cf6b92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.941148787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a48b093-2c89-4396-a358-eff3f67072e4 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.941222363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a48b093-2c89-4396-a358-eff3f67072e4 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.942281677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35660679-5b3b-4abd-a462-4a9eb050c144 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.942640211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208395942619461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35660679-5b3b-4abd-a462-4a9eb050c144 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.943239994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a20ccc33-b5d0-48d4-8e3e-a962419ab30c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.943305735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a20ccc33-b5d0-48d4-8e3e-a962419ab30c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.943339150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a20ccc33-b5d0-48d4-8e3e-a962419ab30c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.972417186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d54619c1-25bf-405d-b5af-ea8c4f39eab9 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.972492293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d54619c1-25bf-405d-b5af-ea8c4f39eab9 name=/runtime.v1.RuntimeService/Version
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.973508825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd90d8af-a1d3-45d9-8079-e7dc07f29dce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.973895708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727208395973874416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd90d8af-a1d3-45d9-8079-e7dc07f29dce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.974389019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b1fb870-39b4-4a68-b96f-1025e7227480 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.974455347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b1fb870-39b4-4a68-b96f-1025e7227480 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 20:06:35 old-k8s-version-510301 crio[623]: time="2024-09-24 20:06:35.974490607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9b1fb870-39b4-4a68-b96f-1025e7227480 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048604] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037476] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.005649] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876766] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.596648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.634241] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.054570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058966] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.197243] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.130135] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.272038] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[Sep24 19:47] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.061152] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778061] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[ +15.063261] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 19:51] systemd-fstab-generator[5117]: Ignoring "noauto" option for root device
	[Sep24 19:53] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.064427] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:06:36 up 19 min,  0 users,  load average: 0.06, 0.03, 0.04
	Linux old-k8s-version-510301 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: goroutine 156 [runnable]:
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000879180)
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: goroutine 157 [select]:
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000948aa0, 0xc0003fe101, 0xc0006dd800, 0xc00094cbf0, 0xc00082b880, 0xc00082b840)
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0003fe120, 0x0, 0x0)
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000879180)
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 24 20:06:32 old-k8s-version-510301 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 20:06:32 old-k8s-version-510301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 24 20:06:32 old-k8s-version-510301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Sep 24 20:06:32 old-k8s-version-510301 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 24 20:06:32 old-k8s-version-510301 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6935]: I0924 20:06:32.974795    6935 server.go:416] Version: v1.20.0
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6935]: I0924 20:06:32.975066    6935 server.go:837] Client rotation is on, will bootstrap in background
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6935]: I0924 20:06:32.976860    6935 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6935]: I0924 20:06:32.977957    6935 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 24 20:06:32 old-k8s-version-510301 kubelet[6935]: W0924 20:06:32.978142    6935 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 2 (226.152691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510301" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.70s)
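The log above points at a kubelet that never became healthy: the :10248 healthz probe is refused throughout kubeadm's wait-control-plane phase, crictl finds no control-plane containers, the apiserver on localhost:8443 is unreachable, and the kubelet unit is in a restart loop (restart counter 140) after warning "Cannot detect current cgroup on cgroup v2". The lines below are a minimal troubleshooting sketch, not part of the test run: they only string together the commands that kubeadm and minikube themselves suggest in the output above. The profile name old-k8s-version-510301 and the cgroup-driver hint are taken from the log; they assume the out/minikube-linux-amd64 binary and profile from this run, and a `ssh -- <cmd>` form that passes a trailing command to the node.

	# Checks suggested by the kubeadm output above, run against the node of this profile.
	out/minikube-linux-amd64 ssh -p old-k8s-version-510301 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-510301 -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-510301 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the cgroup driver hint from the 'Suggestion' line in the log.
	out/minikube-linux-amd64 start -p old-k8s-version-510301 --extra-config=kubelet.cgroup-driver=systemd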

                                                
                                    

Test pass (256/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 8.51
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 82.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 131.66
31 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/parallel/InspektorGadget 12.01
38 TestAddons/parallel/CSI 51.3
39 TestAddons/parallel/Headlamp 13.1
40 TestAddons/parallel/CloudSpanner 5.5
41 TestAddons/parallel/LocalPath 56.22
42 TestAddons/parallel/NvidiaDevicePlugin 5.46
43 TestAddons/parallel/Yakd 11.76
44 TestAddons/StoppedEnableDisable 7.53
45 TestCertOptions 65.41
46 TestCertExpiration 298.45
48 TestForceSystemdFlag 79.41
49 TestForceSystemdEnv 60.67
51 TestKVMDriverInstallOrUpdate 5.43
55 TestErrorSpam/setup 41.64
56 TestErrorSpam/start 0.33
57 TestErrorSpam/status 0.7
58 TestErrorSpam/pause 1.46
59 TestErrorSpam/unpause 1.63
60 TestErrorSpam/stop 5.06
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 52.3
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.27
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
72 TestFunctional/serial/CacheCmd/cache/add_local 1.97
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 35.31
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.3
83 TestFunctional/serial/LogsFileCmd 1.28
84 TestFunctional/serial/InvalidService 3.98
86 TestFunctional/parallel/ConfigCmd 0.29
87 TestFunctional/parallel/DashboardCmd 11.78
88 TestFunctional/parallel/DryRun 0.28
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.03
94 TestFunctional/parallel/ServiceCmdConnect 6.73
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 44.18
98 TestFunctional/parallel/SSHCmd 0.41
99 TestFunctional/parallel/CpCmd 1.26
100 TestFunctional/parallel/MySQL 29.52
101 TestFunctional/parallel/FileSync 0.19
102 TestFunctional/parallel/CertSync 1.32
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
110 TestFunctional/parallel/License 0.21
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.18
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
113 TestFunctional/parallel/ProfileCmd/profile_list 0.33
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
115 TestFunctional/parallel/MountCmd/any-port 8.5
116 TestFunctional/parallel/MountCmd/specific-port 1.64
117 TestFunctional/parallel/ServiceCmd/List 0.48
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
121 TestFunctional/parallel/ServiceCmd/Format 0.43
122 TestFunctional/parallel/ServiceCmd/URL 0.4
123 TestFunctional/parallel/Version/short 0.05
124 TestFunctional/parallel/Version/components 0.82
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.5
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 6.52
130 TestFunctional/parallel/ImageCommands/Setup 1.53
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.03
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.25
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.6
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.25
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.35
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 194.33
159 TestMultiControlPlane/serial/DeployApp 5.79
160 TestMultiControlPlane/serial/PingHostFromPods 1.12
161 TestMultiControlPlane/serial/AddWorkerNode 52.36
162 TestMultiControlPlane/serial/NodeLabels 0.08
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
164 TestMultiControlPlane/serial/CopyFile 12.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.18
170 TestMultiControlPlane/serial/DeleteSecondaryNode 16.45
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
173 TestMultiControlPlane/serial/RestartCluster 347.19
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
175 TestMultiControlPlane/serial/AddSecondaryNode 75.69
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
180 TestJSONOutput/start/Command 47.98
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.62
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.57
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 6.61
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.18
208 TestMainNoArgs 0.04
209 TestMinikubeProfile 84.17
212 TestMountStart/serial/StartWithMountFirst 30.81
213 TestMountStart/serial/VerifyMountFirst 0.36
214 TestMountStart/serial/StartWithMountSecond 25.51
215 TestMountStart/serial/VerifyMountSecond 0.37
216 TestMountStart/serial/DeleteFirst 0.7
217 TestMountStart/serial/VerifyMountPostDelete 0.37
218 TestMountStart/serial/Stop 1.28
219 TestMountStart/serial/RestartStopped 22.66
220 TestMountStart/serial/VerifyMountPostStop 0.36
223 TestMultiNode/serial/FreshStart2Nodes 133.22
224 TestMultiNode/serial/DeployApp2Nodes 4.97
225 TestMultiNode/serial/PingHostFrom2Pods 0.76
226 TestMultiNode/serial/AddNode 47.77
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.57
229 TestMultiNode/serial/CopyFile 6.9
230 TestMultiNode/serial/StopNode 2.17
231 TestMultiNode/serial/StartAfterStop 37.19
233 TestMultiNode/serial/DeleteNode 1.92
235 TestMultiNode/serial/RestartMultiNode 173.08
236 TestMultiNode/serial/ValidateNameConflict 41.4
243 TestScheduledStopUnix 110.71
247 TestRunningBinaryUpgrade 188.99
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
253 TestNoKubernetes/serial/StartWithK8s 91.05
254 TestStoppedBinaryUpgrade/Setup 0.72
255 TestStoppedBinaryUpgrade/Upgrade 117.89
256 TestNoKubernetes/serial/StartWithStopK8s 34.75
257 TestNoKubernetes/serial/Start 28.55
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
259 TestNoKubernetes/serial/ProfileList 27.68
260 TestNoKubernetes/serial/Stop 2.73
261 TestNoKubernetes/serial/StartNoArgs 21.66
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
270 TestNetworkPlugins/group/false 3.85
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestPause/serial/Start 104.85
284 TestPause/serial/SecondStartNoReconfiguration 40.45
285 TestNetworkPlugins/group/auto/Start 58.2
286 TestNetworkPlugins/group/kindnet/Start 82.16
287 TestPause/serial/Pause 0.72
288 TestPause/serial/VerifyStatus 0.26
289 TestPause/serial/Unpause 0.75
290 TestPause/serial/PauseAgain 0.85
291 TestPause/serial/DeletePaused 0.86
292 TestPause/serial/VerifyDeletedResources 4.09
293 TestNetworkPlugins/group/flannel/Start 80.35
294 TestNetworkPlugins/group/auto/KubeletFlags 0.22
295 TestNetworkPlugins/group/auto/NetCatPod 13.24
296 TestNetworkPlugins/group/auto/DNS 0.15
297 TestNetworkPlugins/group/auto/Localhost 0.12
298 TestNetworkPlugins/group/auto/HairPin 0.14
299 TestNetworkPlugins/group/enable-default-cni/Start 91.01
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
302 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
303 TestNetworkPlugins/group/kindnet/DNS 0.16
304 TestNetworkPlugins/group/kindnet/Localhost 0.14
305 TestNetworkPlugins/group/kindnet/HairPin 0.13
306 TestNetworkPlugins/group/bridge/Start 91.14
307 TestNetworkPlugins/group/flannel/ControllerPod 6.01
308 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
309 TestNetworkPlugins/group/flannel/NetCatPod 11.27
310 TestNetworkPlugins/group/flannel/DNS 0.15
311 TestNetworkPlugins/group/flannel/Localhost 0.12
312 TestNetworkPlugins/group/flannel/HairPin 0.12
313 TestNetworkPlugins/group/custom-flannel/Start 75.47
314 TestNetworkPlugins/group/calico/Start 98.73
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
323 TestNetworkPlugins/group/bridge/NetCatPod 11.24
324 TestNetworkPlugins/group/bridge/DNS 0.15
325 TestNetworkPlugins/group/bridge/Localhost 0.12
326 TestNetworkPlugins/group/bridge/HairPin 0.12
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
329 TestNetworkPlugins/group/custom-flannel/DNS 0.2
330 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
331 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
333 TestStartStop/group/no-preload/serial/FirstStart 102.73
335 TestStartStop/group/embed-certs/serial/FirstStart 74.55
336 TestNetworkPlugins/group/calico/ControllerPod 6.01
337 TestNetworkPlugins/group/calico/KubeletFlags 0.2
338 TestNetworkPlugins/group/calico/NetCatPod 11.21
339 TestNetworkPlugins/group/calico/DNS 0.17
340 TestNetworkPlugins/group/calico/Localhost 0.16
341 TestNetworkPlugins/group/calico/HairPin 0.15
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.58
344 TestStartStop/group/embed-certs/serial/DeployApp 10.29
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
347 TestStartStop/group/no-preload/serial/DeployApp 9.27
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
356 TestStartStop/group/embed-certs/serial/SecondStart 661.72
358 TestStartStop/group/no-preload/serial/SecondStart 568.52
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 574.06
361 TestStartStop/group/old-k8s-version/serial/Stop 1.28
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/newest-cni/serial/FirstStart 49.62
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
376 TestStartStop/group/newest-cni/serial/Stop 7.3
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/newest-cni/serial/SecondStart 35.55
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/newest-cni/serial/Pause 2.27
x
+
TestDownloadOnly/v1.20.0/json-events (11.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-366438 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-366438 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.369195667s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.37s)
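Note: the download-only start exercised here can be reproduced directly from the CLI with the same flags the test passes (the profile name below is illustrative; the harness's duplicated --container-runtime flag is dropped):

  $ out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
      --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2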

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0924 18:20:02.301957   10949 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0924 18:20:02.302063   10949 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
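Note: this check only verifies that the cached preload tarball exists on disk. Assuming a default MINIKUBE_HOME (the CI job above points it at a Jenkins workspace instead), the equivalent manual check is:

  $ ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4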

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-366438
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-366438: exit status 85 (55.027316ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |          |
	|         | -p download-only-366438        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:19:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:19:50.968703   10960 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:19:50.968857   10960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:50.968871   10960 out.go:358] Setting ErrFile to fd 2...
	I0924 18:19:50.968890   10960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:50.969359   10960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	W0924 18:19:50.969494   10960 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19700-3751/.minikube/config/config.json: open /home/jenkins/minikube-integration/19700-3751/.minikube/config/config.json: no such file or directory
	I0924 18:19:50.970076   10960 out.go:352] Setting JSON to true
	I0924 18:19:50.970981   10960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":142,"bootTime":1727201849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:19:50.971072   10960 start.go:139] virtualization: kvm guest
	I0924 18:19:50.973264   10960 out.go:97] [download-only-366438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0924 18:19:50.973347   10960 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:19:50.973379   10960 notify.go:220] Checking for updates...
	I0924 18:19:50.974729   10960 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:19:50.976088   10960 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:19:50.977367   10960 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:19:50.978519   10960 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:19:50.979847   10960 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0924 18:19:50.982308   10960 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:19:50.982504   10960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:19:51.092093   10960 out.go:97] Using the kvm2 driver based on user configuration
	I0924 18:19:51.092119   10960 start.go:297] selected driver: kvm2
	I0924 18:19:51.092126   10960 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:19:51.092460   10960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:19:51.092583   10960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:19:51.107307   10960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:19:51.107371   10960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:19:51.108081   10960 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0924 18:19:51.108272   10960 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:19:51.108301   10960 cni.go:84] Creating CNI manager for ""
	I0924 18:19:51.108335   10960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:19:51.108347   10960 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:19:51.108405   10960 start.go:340] cluster config:
	{Name:download-only-366438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-366438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:19:51.108617   10960 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:19:51.110490   10960 out.go:97] Downloading VM boot image ...
	I0924 18:19:51.110522   10960 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 18:19:54.553895   10960 out.go:97] Starting "download-only-366438" primary control-plane node in "download-only-366438" cluster
	I0924 18:19:54.553927   10960 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 18:19:54.578674   10960 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 18:19:54.578701   10960 cache.go:56] Caching tarball of preloaded images
	I0924 18:19:54.578912   10960 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 18:19:54.580528   10960 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 18:19:54.580546   10960 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0924 18:19:54.605171   10960 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 18:20:00.627546   10960 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0924 18:20:00.627642   10960 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-366438 host does not exist
	  To start a cluster, run: "minikube start -p download-only-366438"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
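Note: exit status 85 is the outcome the test asserts, not a failure: the download-only profile never created a host, so there are no logs to collect. The same behaviour can be seen against the profile from this run:

  $ out/minikube-linux-amd64 logs -p download-only-366438; echo "exit: $?"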

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-366438
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (8.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-880989 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-880989 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.511826548s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0924 18:20:11.118032   10949 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0924 18:20:11.118080   10949 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-880989
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-880989: exit status 85 (55.650418ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p download-only-366438        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| delete  | -p download-only-366438        | download-only-366438 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| start   | -o=json --download-only        | download-only-880989 | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	|         | -p download-only-880989        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:02.641683   11190 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:02.641928   11190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:02.641937   11190 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:02.641949   11190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:02.642118   11190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:20:02.642657   11190 out.go:352] Setting JSON to true
	I0924 18:20:02.643523   11190 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":154,"bootTime":1727201849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:20:02.643619   11190 start.go:139] virtualization: kvm guest
	I0924 18:20:02.645647   11190 out.go:97] [download-only-880989] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:20:02.645772   11190 notify.go:220] Checking for updates...
	I0924 18:20:02.647148   11190 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:20:02.648449   11190 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:02.649806   11190 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:20:02.651053   11190 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:20:02.652214   11190 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0924 18:20:02.654086   11190 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 18:20:02.654281   11190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:02.685836   11190 out.go:97] Using the kvm2 driver based on user configuration
	I0924 18:20:02.685863   11190 start.go:297] selected driver: kvm2
	I0924 18:20:02.685870   11190 start.go:901] validating driver "kvm2" against <nil>
	I0924 18:20:02.686368   11190 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:02.686450   11190 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19700-3751/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 18:20:02.701950   11190 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 18:20:02.702003   11190 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:02.702503   11190 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0924 18:20:02.702648   11190 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 18:20:02.702673   11190 cni.go:84] Creating CNI manager for ""
	I0924 18:20:02.702697   11190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 18:20:02.702705   11190 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:02.702747   11190 start.go:340] cluster config:
	{Name:download-only-880989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-880989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:02.702869   11190 iso.go:125] acquiring lock: {Name:mke95c23e106ba13242358330b98be70d572c626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 18:20:02.704619   11190 out.go:97] Starting "download-only-880989" primary control-plane node in "download-only-880989" cluster
	I0924 18:20:02.704632   11190 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:02.763378   11190 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 18:20:02.763405   11190 cache.go:56] Caching tarball of preloaded images
	I0924 18:20:02.763544   11190 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 18:20:02.765190   11190 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0924 18:20:02.765209   11190 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0924 18:20:02.790733   11190 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19700-3751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-880989 host does not exist
	  To start a cluster, run: "minikube start -p download-only-880989"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-880989
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0924 18:20:11.660726   10949 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-303583 --alsologtostderr --binary-mirror http://127.0.0.1:40655 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-303583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-303583
--- PASS: TestBinaryMirror (0.58s)
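Note: the test stands up a throwaway HTTP server and points --binary-mirror at it, so kubectl/kubelet/kubeadm are fetched from that address instead of dl.k8s.io. A sketch of the invocation (assumes something is actually serving the binaries on the given port; the profile name is illustrative):

  $ out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
      --binary-mirror http://127.0.0.1:40655 --driver=kvm2 --container-runtime=crio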

                                                
                                    
x
+
TestOffline (82.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-418979 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-418979 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.804276479s)
helpers_test.go:175: Cleaning up "offline-crio-418979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-418979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-418979: (1.354050101s)
--- PASS: TestOffline (82.16s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-218885
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-218885: exit status 85 (50.208959ms)

                                                
                                                
-- stdout --
	* Profile "addons-218885" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-218885"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-218885
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-218885: exit status 85 (51.002985ms)

                                                
                                                
-- stdout --
	* Profile "addons-218885" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-218885"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (131.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-218885 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-218885 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m11.660262984s)
--- PASS: TestAddons/Setup (131.66s)
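Note: all addons here are enabled up front via repeated --addons flags on start. Individual addons can also be toggled on the running profile afterwards, for example (addon names as in the start flags above):

  $ out/minikube-linux-amd64 -p addons-218885 addons enable metrics-server
  $ out/minikube-linux-amd64 -p addons-218885 addons list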

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-218885 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-218885 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
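Note: this verifies that the gcp-auth addon copies its credentials Secret into newly created namespaces. The manual equivalent is exactly the two kubectl calls shown above:

  $ kubectl --context addons-218885 create ns new-namespace
  $ kubectl --context addons-218885 get secret gcp-auth -n new-namespace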

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6xm9n" [b4c1cd26-f7cc-4928-b1c9-7e2d0e0cc07e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00497174s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-218885
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-218885: (6.002066872s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                    
x
+
TestAddons/parallel/CSI (51.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.262576ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-218885 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-218885 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [39f4dcb9-dade-45dc-b26f-b8845fb7ae63] Pending
helpers_test.go:344: "task-pv-pod" [39f4dcb9-dade-45dc-b26f-b8845fb7ae63] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [39f4dcb9-dade-45dc-b26f-b8845fb7ae63] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.008111444s
addons_test.go:528: (dbg) Run:  kubectl --context addons-218885 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-218885 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-218885 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-218885 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-218885 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-218885 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-218885 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3ebce6dc-29aa-4c75-a46d-c173942202d3] Pending
helpers_test.go:344: "task-pv-pod-restore" [3ebce6dc-29aa-4c75-a46d-c173942202d3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3ebce6dc-29aa-4c75-a46d-c173942202d3] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004185015s
addons_test.go:570: (dbg) Run:  kubectl --context addons-218885 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-218885 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-218885 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.640585059s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.30s)
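Note: the long run of `get pvc hpvc -o jsonpath={.status.phase}` calls above is the test helper polling the claim's phase. A rough shell equivalent of that wait (a sketch, not the helper's exact retry/timeout logic, and assuming Bound is the phase being waited for):

  $ until [ "$(kubectl --context addons-218885 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done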

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-218885 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-5nkmt" [59e6f1f0-361c-4bc4-bdad-ee140581d073] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-5nkmt" [59e6f1f0-361c-4bc4-bdad-ee140581d073] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-5nkmt" [59e6f1f0-361c-4bc4-bdad-ee140581d073] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004564313s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.10s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-x6wlg" [f1823d56-3f3b-4741-8de4-5c38ebfb622e] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003803546s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-218885
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.22s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-218885 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-218885 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e4137bbf-85db-4e98-85d2-28f5aa2f3dbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e4137bbf-85db-4e98-85d2-28f5aa2f3dbd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e4137bbf-85db-4e98-85d2-28f5aa2f3dbd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003728139s
addons_test.go:938: (dbg) Run:  kubectl --context addons-218885 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 ssh "cat /opt/local-path-provisioner/pvc-32fb6863-7fde-481e-85f8-da616d5f9350_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-218885 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-218885 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.427709797s)
--- PASS: TestAddons/parallel/LocalPath (56.22s)
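Note: the `ssh "cat /opt/local-path-provisioner/..."` step is what proves the pod's write reached the node-local directory created by the local-path provisioner. The same spot-check by hand (the pvc-<uid> directory name is generated per claim, so list it first; <pvc-dir> below is a placeholder):

  $ out/minikube-linux-amd64 -p addons-218885 ssh "ls /opt/local-path-provisioner/"
  $ out/minikube-linux-amd64 -p addons-218885 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"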

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qhkcp" [2d4afd4b-8f05-4a66-aecf-ac6db891b2a7] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004605429s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-218885
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I0924 18:30:26.699612   10949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ddn7j" [f85cbdc8-5f55-4b11-8366-afd4d65dc2d6] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004408663s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-218885 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-218885 addons disable yakd --alsologtostderr -v=1: (5.751486256s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (7.53s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-218885
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-218885: (7.272159146s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-218885
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-218885
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-218885
--- PASS: TestAddons/StoppedEnableDisable (7.53s)

                                                
                                    
x
+
TestCertOptions (65.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-103452 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0924 19:32:24.266268   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-103452 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.172136938s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-103452 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-103452 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-103452 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-103452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-103452
--- PASS: TestCertOptions (65.41s)
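Note: the assertions here are that the extra --apiserver-ips/--apiserver-names values appear as SANs in the apiserver certificate and that the non-default port 8555 is what the kubeconfig points at. Manual spot-checks against the same profile:

  $ out/minikube-linux-amd64 -p cert-options-103452 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  $ kubectl --context cert-options-103452 config view | grep server: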

                                                
                                    
x
+
TestCertExpiration (298.45s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-563000 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-563000 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.468549831s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-563000 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-563000 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (59.870807212s)
helpers_test.go:175: Cleaning up "cert-expiration-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-563000
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-563000: (1.105591861s)
--- PASS: TestCertExpiration (298.45s)
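Note: the profile is first created with certificates valid for only 3m, left to lapse, then restarted with --cert-expiration=8760h, which forces regeneration on start. A quick way to read the resulting expiry off the node (a spot-check, not part of the test itself):

  $ out/minikube-linux-amd64 -p cert-expiration-563000 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"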

                                                
                                    
x
+
TestForceSystemdFlag (79.41s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-166165 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-166165 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.236491733s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-166165 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-166165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-166165
--- PASS: TestForceSystemdFlag (79.41s)
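Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step is how the test confirms --force-systemd took effect; with CRI-O that surfaces as the cgroup manager setting. A narrower check (the cgroup_manager key name comes from CRI-O's config schema, not from this log):

  $ out/minikube-linux-amd64 -p force-systemd-flag-166165 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"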

                                                
                                    
x
+
TestForceSystemdEnv (60.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-940861 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-940861 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.662460672s)
helpers_test.go:175: Cleaning up "force-systemd-env-940861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-940861
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-940861: (1.002348862s)
--- PASS: TestForceSystemdEnv (60.67s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0924 19:30:38.187707   10949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 19:30:38.187863   10949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0924 19:30:38.216702   10949 install.go:62] docker-machine-driver-kvm2: exit status 1
W0924 19:30:38.217090   10949 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 19:30:38.217157   10949 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate811114061/001/docker-machine-driver-kvm2
I0924 19:30:38.671005   10949 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate811114061/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640] Decompressors:map[bz2:0xc000717c60 gz:0xc000717c68 tar:0xc000717be0 tar.bz2:0xc000717bf0 tar.gz:0xc000717c30 tar.xz:0xc000717c40 tar.zst:0xc000717c50 tbz2:0xc000717bf0 tgz:0xc000717c30 txz:0xc000717c40 tzst:0xc000717c50 xz:0xc000717c80 zip:0xc000717c90 zst:0xc000717c88] Getters:map[file:0xc001d3c510 http:0xc00014f0e0 https:0xc00014f540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 19:30:38.671067   10949 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate811114061/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.43s)
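The sequence above is the driver updater's fallback: the checksum file for the arch-specific docker-machine-driver-kvm2-amd64 download returns 404, so the updater retries the common (non-arch) release URL. Below is a minimal Go sketch of that fallback under stated assumptions: the fetch/fetchDriver names are illustrative, and the real code goes through go-getter with checksum verification, which this sketch omits.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url to dst and fails on any non-200 response,
	// mirroring the "bad response code: 404" error seen in the log above.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, resp.Body)
		return err
	}

	// fetchDriver prefers the arch-specific binary and falls back to the
	// common one when the first attempt fails, as the test log shows.
	func fetchDriver(dst string) error {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		if err := fetch(base+"-amd64", dst); err == nil {
			return nil
		}
		return fetch(base, dst)
	}

	func main() {
		if err := fetchDriver("/tmp/docker-machine-driver-kvm2"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("driver downloaded")
	}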

                                                
                                    
x
+
TestErrorSpam/setup (41.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-319009 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-319009 --driver=kvm2  --container-runtime=crio
E0924 18:37:24.266921   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.273291   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.284798   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.306149   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.347547   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.428966   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:24.590975   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-319009 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-319009 --driver=kvm2  --container-runtime=crio: (41.643636962s)
--- PASS: TestErrorSpam/setup (41.64s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 start --dry-run
E0924 18:37:24.912721   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 status
E0924 18:37:25.554477   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 pause
E0924 18:37:26.836383   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
x
+
TestErrorSpam/stop (5.06s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop
E0924 18:37:29.398410   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop: (1.505804031s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop: (1.560491109s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-319009 --log_dir /tmp/nospam-319009 stop: (1.99489517s)
--- PASS: TestErrorSpam/stop (5.06s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19700-3751/.minikube/files/etc/test/nested/copy/10949/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.3s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0924 18:37:34.519684   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:37:44.761596   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:38:05.243093   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-884668 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.294534772s)
--- PASS: TestFunctional/serial/StartWithProxy (52.30s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0924 18:38:26.771613   10949 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --alsologtostderr -v=8
E0924 18:38:46.205052   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-884668 --alsologtostderr -v=8: (33.270864254s)
functional_test.go:663: soft start took 33.271621035s for "functional-884668" cluster.
I0924 18:39:00.042808   10949 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.27s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-884668 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:3.1: (1.094738703s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:3.3: (1.210667718s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 cache add registry.k8s.io/pause:latest: (1.156426765s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-884668 /tmp/TestFunctionalserialCacheCmdcacheadd_local728358694/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache add minikube-local-cache-test:functional-884668
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 cache add minikube-local-cache-test:functional-884668: (1.650979661s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache delete minikube-local-cache-test:functional-884668
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-884668
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.084844ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 cache reload: (1.025221855s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 kubectl -- --context functional-884668 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-884668 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-884668 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.310732294s)
functional_test.go:761: restart took 35.310853808s for "functional-884668" cluster.
I0924 18:39:43.166856   10949 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (35.31s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-884668 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 logs: (1.298233883s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 logs --file /tmp/TestFunctionalserialLogsFileCmd2821266986/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 logs --file /tmp/TestFunctionalserialLogsFileCmd2821266986/001/logs.txt: (1.279050635s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.98s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-884668 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-884668
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-884668: exit status 115 (260.003777ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.232:32542 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-884668 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 config get cpus: exit status 14 (50.632039ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 config get cpus: exit status 14 (41.749602ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-884668 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-884668 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20655: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.78s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-884668 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.676487ms)

                                                
                                                
-- stdout --
	* [functional-884668] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:39:52.411737   20543 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:39:52.411867   20543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:52.411877   20543 out.go:358] Setting ErrFile to fd 2...
	I0924 18:39:52.411883   20543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:52.412081   20543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:39:52.412721   20543 out.go:352] Setting JSON to false
	I0924 18:39:52.413971   20543 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1343,"bootTime":1727201849,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:39:52.414053   20543 start.go:139] virtualization: kvm guest
	I0924 18:39:52.416164   20543 out.go:177] * [functional-884668] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 18:39:52.417527   20543 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:39:52.417587   20543 notify.go:220] Checking for updates...
	I0924 18:39:52.420521   20543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:39:52.422012   20543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:39:52.423480   20543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:39:52.425895   20543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:39:52.427169   20543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:39:52.428643   20543 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:39:52.429176   20543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:39:52.429246   20543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:39:52.445322   20543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0924 18:39:52.445787   20543 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:39:52.446311   20543 main.go:141] libmachine: Using API Version  1
	I0924 18:39:52.446334   20543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:39:52.446749   20543 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:39:52.446933   20543 main.go:141] libmachine: (functional-884668) Calling .DriverName
	I0924 18:39:52.447175   20543 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:39:52.447511   20543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:39:52.447550   20543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:39:52.464605   20543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42019
	I0924 18:39:52.465064   20543 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:39:52.465523   20543 main.go:141] libmachine: Using API Version  1
	I0924 18:39:52.465542   20543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:39:52.465862   20543 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:39:52.466015   20543 main.go:141] libmachine: (functional-884668) Calling .DriverName
	I0924 18:39:52.506974   20543 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 18:39:52.508301   20543 start.go:297] selected driver: kvm2
	I0924 18:39:52.508316   20543 start.go:901] validating driver "kvm2" against &{Name:functional-884668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-884668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:39:52.508465   20543 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:39:52.510841   20543 out.go:201] 
	W0924 18:39:52.512145   20543 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 18:39:52.514107   20543 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
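The dry run above fails as intended: the requested 250MB is below minikube's usable minimum of 1800MB, producing RSRC_INSUFFICIENT_REQ_MEMORY. A minimal Go sketch of such a guard follows; the constant and function names are assumptions for illustration, not minikube's own identifiers.

	package main

	import "fmt"

	// minUsableMemoryMB comes from the RSRC_INSUFFICIENT_REQ_MEMORY message above;
	// the name is illustrative, not minikube's actual constant.
	const minUsableMemoryMB = 1800

	// validateRequestedMemory rejects allocations below the usable minimum.
	func validateRequestedMemory(mb int) error {
		if mb < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", mb, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250))  // rejected, as in the dry run above
		fmt.Println(validateRequestedMemory(4000)) // accepted, matching the profile's 4000MB
	}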

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884668 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-884668 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.90277ms)

                                                
                                                
-- stdout --
	* [functional-884668] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 18:39:52.277569   20500 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:39:52.277714   20500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:52.277727   20500 out.go:358] Setting ErrFile to fd 2...
	I0924 18:39:52.277734   20500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:52.278110   20500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 18:39:52.278774   20500 out.go:352] Setting JSON to false
	I0924 18:39:52.280046   20500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1343,"bootTime":1727201849,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:39:52.280157   20500 start.go:139] virtualization: kvm guest
	I0924 18:39:52.282389   20500 out.go:177] * [functional-884668] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0924 18:39:52.283766   20500 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:39:52.283773   20500 notify.go:220] Checking for updates...
	I0924 18:39:52.286328   20500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:39:52.287492   20500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 18:39:52.288698   20500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 18:39:52.289876   20500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:39:52.291067   20500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:39:52.292707   20500 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 18:39:52.293114   20500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:39:52.293156   20500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:39:52.307860   20500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0924 18:39:52.308373   20500 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:39:52.309063   20500 main.go:141] libmachine: Using API Version  1
	I0924 18:39:52.309092   20500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:39:52.309490   20500 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:39:52.309670   20500 main.go:141] libmachine: (functional-884668) Calling .DriverName
	I0924 18:39:52.309922   20500 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:39:52.310334   20500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 18:39:52.310376   20500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 18:39:52.325053   20500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0924 18:39:52.325510   20500 main.go:141] libmachine: () Calling .GetVersion
	I0924 18:39:52.326023   20500 main.go:141] libmachine: Using API Version  1
	I0924 18:39:52.326044   20500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 18:39:52.326412   20500 main.go:141] libmachine: () Calling .GetMachineName
	I0924 18:39:52.326610   20500 main.go:141] libmachine: (functional-884668) Calling .DriverName
	I0924 18:39:52.360421   20500 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0924 18:39:52.361637   20500 start.go:297] selected driver: kvm2
	I0924 18:39:52.361654   20500 start.go:901] validating driver "kvm2" against &{Name:functional-884668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-884668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:39:52.361794   20500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:39:52.364095   20500 out.go:201] 
	W0924 18:39:52.365160   20500 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 18:39:52.366370   20500 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-884668 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-884668 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-m5kfz" [1a4c7f0c-6714-405a-8556-ffa02bc7f80d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-m5kfz" [1a4c7f0c-6714-405a-8556-ffa02bc7f80d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.064917951s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.232:31919
functional_test.go:1675: http://192.168.39.232:31919: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-m5kfz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.232:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.232:31919
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.73s)
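The check above boils down to an HTTP GET against the NodePort URL that `minikube service hello-node-connect --url` printed, followed by reading the echoserver's response body. A minimal Go sketch of the same probe is below; the URL is the one from this particular run and will differ per cluster.

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// NodePort URL reported by this run's log; substitute the URL your own run prints.
		url := "http://192.168.39.232:31919"
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// The echoserver reports hostname, server values, and request headers, as shown above.
		fmt.Println(string(body))
	}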

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7bd168ca-9d29-41cb-a4ed-268a3c1c7f57] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00472235s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-884668 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-884668 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-884668 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884668 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c19f9861-911c-4ebb-a6c9-2c1cd4f6b3bc] Pending
helpers_test.go:344: "sp-pod" [c19f9861-911c-4ebb-a6c9-2c1cd4f6b3bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c19f9861-911c-4ebb-a6c9-2c1cd4f6b3bc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003276031s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-884668 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-884668 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-884668 delete -f testdata/storage-provisioner/pod.yaml: (2.928980445s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884668 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85a329d2-194c-447c-aa67-fb9c0ac430a6] Pending
helpers_test.go:344: "sp-pod" [85a329d2-194c-447c-aa67-fb9c0ac430a6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85a329d2-194c-447c-aa67-fb9c0ac430a6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003154994s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-884668 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.18s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh -n functional-884668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cp functional-884668:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4013975958/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh -n functional-884668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh -n functional-884668 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-884668 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-xw9qq" [14bb76b4-2cd5-4e37-bbeb-a29d7adfb437] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-xw9qq" [14bb76b4-2cd5-4e37-bbeb-a29d7adfb437] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.003425954s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-884668 exec mysql-6cdb49bbb-xw9qq -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-884668 exec mysql-6cdb49bbb-xw9qq -- mysql -ppassword -e "show databases;": exit status 1 (118.736971ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0924 18:40:33.170718   10949 retry.go:31] will retry after 1.029299061s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-884668 exec mysql-6cdb49bbb-xw9qq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.52s)
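Note: the first "show databases;" attempt above fails with ERROR 2002 because the pod already reports Running while mysqld is still creating its socket; the harness logs the error and retries about a second later, and the second attempt succeeds. A minimal sketch of that retry loop, assuming kubectl is on PATH and reusing the context and pod name from this run; the attempt count and backoff are illustrative.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "--context", "functional-884668", "exec", "mysql-6cdb49bbb-xw9qq", "--",
            "mysql", "-ppassword", "-e", "show databases;",
        }

        // mysqld may still be initializing right after the pod reports Running,
        // so tolerate a few "ERROR 2002 (can't connect to socket)" failures.
        var lastErr error
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            lastErr = err
            log.Printf("attempt %d failed (%v), retrying: %s", attempt, err, out)
            time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
        }
        log.Fatalf("mysql never became reachable: %v", lastErr)
    }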

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10949/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /etc/test/nested/copy/10949/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10949.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /etc/ssl/certs/10949.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10949.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /usr/share/ca-certificates/10949.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/109492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /etc/ssl/certs/109492.pem"
2024/09/24 18:40:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/109492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /usr/share/ca-certificates/109492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
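Note: CertSync verifies that the host's test certificates were synced into the VM both under their literal names (/etc/ssl/certs/10949.pem, /usr/share/ca-certificates/10949.pem) and under OpenSSL subject-hash names such as /etc/ssl/certs/51391683.0, the form TLS libraries use to look certificates up. A minimal sketch of the same check, assuming the binary and profile from this run and treating "contains a PEM block" as a sufficient test.

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        paths := []string{
            "/etc/ssl/certs/10949.pem",
            "/usr/share/ca-certificates/10949.pem",
            "/etc/ssl/certs/51391683.0", // subject-hash name for the same cert
        }
        for _, p := range paths {
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-884668",
                "ssh", "sudo cat "+p).CombinedOutput()
            if err != nil {
                log.Fatalf("reading %s: %v\n%s", p, err, out)
            }
            if !strings.Contains(string(out), "BEGIN CERTIFICATE") {
                log.Fatalf("%s does not look like a PEM certificate", p)
            }
            log.Printf("%s synced", p)
        }
    }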

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-884668 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "sudo systemctl is-active docker": exit status 1 (194.261382ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "sudo systemctl is-active containerd": exit status 1 (187.889537ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
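Note: the non-zero exits above are the expected result. systemctl is-active prints "inactive" and exits non-zero (status 3, as the stderr shows) when a unit is not running, which is exactly what confirms docker and containerd are disabled while cri-o is the active runtime. A minimal sketch that treats a non-active state as the passing case, assuming the binary and profile from this run.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // isActive reports whether a systemd unit inside the guest is active.
    // systemctl is-active exits 0 only for "active"; an inactive unit still
    // prints its state but returns non-zero, so the error alone is not a failure.
    func isActive(unit string) bool {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-884668",
            "ssh", "sudo systemctl is-active "+unit).CombinedOutput()
        state := strings.TrimSpace(string(out))
        if err != nil && state == "" {
            log.Fatalf("checking %s: %v", unit, err)
        }
        // Use HasPrefix, not Contains: "inactive" contains the substring "active".
        return strings.HasPrefix(state, "active")
    }

    func main() {
        for _, unit := range []string{"docker", "containerd"} {
            if isActive(unit) {
                log.Fatalf("%s should not be active when cri-o is the runtime", unit)
            }
            fmt.Printf("%s is not active, as expected\n", unit)
        }
    }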

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-884668 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-884668 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-s5cqd" [c08bfcc5-86ca-401f-9306-25d7f56114d7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-s5cqd" [c08bfcc5-86ca-401f-9306-25d7f56114d7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003888065s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "288.655438ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.999458ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "359.46751ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.184548ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdany-port371837656/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727203191045610131" to /tmp/TestFunctionalparallelMountCmdany-port371837656/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727203191045610131" to /tmp/TestFunctionalparallelMountCmdany-port371837656/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727203191045610131" to /tmp/TestFunctionalparallelMountCmdany-port371837656/001/test-1727203191045610131
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.340205ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:39:51.302457   10949 retry.go:31] will retry after 329.94814ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 24 18:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 24 18:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 24 18:39 test-1727203191045610131
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh cat /mount-9p/test-1727203191045610131
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-884668 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9e992a29-fd98-489c-a01f-f52484a8f48f] Pending
helpers_test.go:344: "busybox-mount" [9e992a29-fd98-489c-a01f-f52484a8f48f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9e992a29-fd98-489c-a01f-f52484a8f48f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9e992a29-fd98-489c-a01f-f52484a8f48f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004043415s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-884668 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdany-port371837656/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.50s)
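Note: the any-port flow is: start minikube mount as a background daemon, poll findmnt -T /mount-9p inside the guest until the 9p mount appears (the first probe races the daemon and is retried, as logged above), then run the busybox-mount pod against the mounted files. A minimal sketch of the polling step, assuming the binary, profile, and /mount-9p target from this run; the timeout and poll interval are illustrative.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForMount polls findmnt inside the guest until the 9p mount is visible.
    // The mount daemon needs a moment to attach, so the first probe often fails.
    func waitForMount(target string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-884668",
                "ssh", fmt.Sprintf("findmnt -T %s | grep 9p", target))
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(300 * time.Millisecond)
        }
        return fmt.Errorf("mount %s did not appear within %v", target, timeout)
    }

    func main() {
        if err := waitForMount("/mount-9p", 10*time.Second); err != nil {
            log.Fatal(err)
        }
        log.Println("9p mount is ready")
    }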

TestFunctional/parallel/MountCmd/specific-port (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdspecific-port1723190660/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.964284ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:39:59.758521   10949 retry.go:31] will retry after 328.056715ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdspecific-port1723190660/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "sudo umount -f /mount-9p": exit status 1 (245.066968ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-884668 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdspecific-port1723190660/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service list -o json
functional_test.go:1494: Took "476.563306ms" to run "out/minikube-linux-amd64 -p functional-884668 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.232:32559
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T" /mount1: exit status 1 (328.508493ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0924 18:40:01.507891   10949 retry.go:31] will retry after 604.013661ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-884668 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884668 /tmp/TestFunctionalparallelMountCmdVerifyCleanup89750287/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.232:32559
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884668 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-884668
localhost/kicbase/echo-server:functional-884668
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884668 image ls --format short --alsologtostderr:
I0924 18:40:21.781606   22473 out.go:345] Setting OutFile to fd 1 ...
I0924 18:40:21.781752   22473 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:21.781765   22473 out.go:358] Setting ErrFile to fd 2...
I0924 18:40:21.781771   22473 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:21.781994   22473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
I0924 18:40:21.782606   22473 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:21.782719   22473 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:21.783112   22473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:21.783163   22473 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:21.798416   22473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
I0924 18:40:21.798902   22473 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:21.799512   22473 main.go:141] libmachine: Using API Version  1
I0924 18:40:21.799535   22473 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:21.799893   22473 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:21.800096   22473 main.go:141] libmachine: (functional-884668) Calling .GetState
I0924 18:40:21.802108   22473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:21.802165   22473 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:21.821015   22473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
I0924 18:40:21.821464   22473 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:21.822013   22473 main.go:141] libmachine: Using API Version  1
I0924 18:40:21.822040   22473 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:21.822378   22473 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:21.822604   22473 main.go:141] libmachine: (functional-884668) Calling .DriverName
I0924 18:40:21.822809   22473 ssh_runner.go:195] Run: systemctl --version
I0924 18:40:21.822853   22473 main.go:141] libmachine: (functional-884668) Calling .GetSSHHostname
I0924 18:40:21.826486   22473 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:21.826945   22473 main.go:141] libmachine: (functional-884668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:1b:a8", ip: ""} in network mk-functional-884668: {Iface:virbr1 ExpiryTime:2024-09-24 19:37:48 +0000 UTC Type:0 Mac:52:54:00:77:1b:a8 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-884668 Clientid:01:52:54:00:77:1b:a8}
I0924 18:40:21.826976   22473 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined IP address 192.168.39.232 and MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:21.827130   22473 main.go:141] libmachine: (functional-884668) Calling .GetSSHPort
I0924 18:40:21.827316   22473 main.go:141] libmachine: (functional-884668) Calling .GetSSHKeyPath
I0924 18:40:21.827477   22473 main.go:141] libmachine: (functional-884668) Calling .GetSSHUsername
I0924 18:40:21.827642   22473 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/functional-884668/id_rsa Username:docker}
I0924 18:40:21.954114   22473 ssh_runner.go:195] Run: sudo crictl images --output json
I0924 18:40:22.075639   22473 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.075656   22473 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.076027   22473 main.go:141] libmachine: (functional-884668) DBG | Closing plugin on server side
I0924 18:40:22.076035   22473 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.076068   22473 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:22.076084   22473 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.076095   22473 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.076341   22473 main.go:141] libmachine: (functional-884668) DBG | Closing plugin on server side
I0924 18:40:22.076352   22473 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.076373   22473 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
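Note: the stderr above shows what image ls does under the hood: SSH into the node and run sudo crictl images --output json, then format the result. A minimal sketch of decoding that JSON, assuming the field names mirror the CRI Image message (the struct tags below are an assumption, not taken from this report).

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Image models one entry of `crictl images --output json`; the field names
    // are assumed from the CRI Image message rather than verified against crictl.
    type Image struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-884668",
            "ssh", "sudo crictl images --output json").Output()
        if err != nil {
            log.Fatalf("crictl images: %v", err)
        }
        var listing struct {
            Images []Image `json:"images"`
        }
        if err := json.Unmarshal(out, &listing); err != nil {
            log.Fatalf("decoding crictl output: %v", err)
        }
        for _, img := range listing.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }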

TestFunctional/parallel/ImageCommands/ImageListTable (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884668 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/kicbase/echo-server           | functional-884668  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | c7b4f26a7d93f | 44.6MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-884668  | 1ee642a9b8aa1 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884668 image ls --format table --alsologtostderr:
I0924 18:40:22.679377   22595 out.go:345] Setting OutFile to fd 1 ...
I0924 18:40:22.679740   22595 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.679758   22595 out.go:358] Setting ErrFile to fd 2...
I0924 18:40:22.679766   22595 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.680458   22595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
I0924 18:40:22.681391   22595 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.681550   22595 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.682264   22595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.682325   22595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.697329   22595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
I0924 18:40:22.697836   22595 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.698390   22595 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.698411   22595 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.698742   22595 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.698944   22595 main.go:141] libmachine: (functional-884668) Calling .GetState
I0924 18:40:22.700744   22595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.700812   22595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.716800   22595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
I0924 18:40:22.717322   22595 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.717926   22595 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.717955   22595 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.718282   22595 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.718454   22595 main.go:141] libmachine: (functional-884668) Calling .DriverName
I0924 18:40:22.718713   22595 ssh_runner.go:195] Run: systemctl --version
I0924 18:40:22.718747   22595 main.go:141] libmachine: (functional-884668) Calling .GetSSHHostname
I0924 18:40:22.721482   22595 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.721928   22595 main.go:141] libmachine: (functional-884668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:1b:a8", ip: ""} in network mk-functional-884668: {Iface:virbr1 ExpiryTime:2024-09-24 19:37:48 +0000 UTC Type:0 Mac:52:54:00:77:1b:a8 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-884668 Clientid:01:52:54:00:77:1b:a8}
I0924 18:40:22.721956   22595 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined IP address 192.168.39.232 and MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.722097   22595 main.go:141] libmachine: (functional-884668) Calling .GetSSHPort
I0924 18:40:22.722258   22595 main.go:141] libmachine: (functional-884668) Calling .GetSSHKeyPath
I0924 18:40:22.722388   22595 main.go:141] libmachine: (functional-884668) Calling .GetSSHUsername
I0924 18:40:22.722518   22595 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/functional-884668/id_rsa Username:docker}
I0924 18:40:22.830233   22595 ssh_runner.go:195] Run: sudo crictl images --output json
I0924 18:40:22.901485   22595 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.901499   22595 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.901791   22595 main.go:141] libmachine: (functional-884668) DBG | Closing plugin on server side
I0924 18:40:22.901793   22595 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.901826   22595 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:22.901841   22595 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.901851   22595 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.902076   22595 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.902093   22595 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:22.902096   22595 main.go:141] libmachine: (functional-884668) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.50s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884668 image ls --format json --alsologtostderr:
[{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":["docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44647101"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac2
6864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-co
ntroller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id"
:"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigest
s":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["reg
istry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328
d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-884668"],"size":"4943877"},{"id":"1ee642a9b8aa1172c583fc7601d6deebdfecae05af5bdf2e1b3418015cd834fd","repoDigests":["localhost/minikube-local-cache-test@sha256:6bdd57359d4ba1630f47caaf3fe6bcf92ac254b9656250ef5efd305673052c0f"],"repoTags":["localhost/minikube-local-cache-test:functional-884668"],"size":"3330"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884668 image ls --format json --alsologtostderr:
I0924 18:40:22.368042   22555 out.go:345] Setting OutFile to fd 1 ...
I0924 18:40:22.368287   22555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.368296   22555 out.go:358] Setting ErrFile to fd 2...
I0924 18:40:22.368307   22555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.368554   22555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
I0924 18:40:22.369290   22555 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.369427   22555 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.369930   22555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.369978   22555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.384443   22555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
I0924 18:40:22.384923   22555 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.385425   22555 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.385448   22555 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.385805   22555 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.386012   22555 main.go:141] libmachine: (functional-884668) Calling .GetState
I0924 18:40:22.387960   22555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.388040   22555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.403529   22555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
I0924 18:40:22.405698   22555 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.406293   22555 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.406314   22555 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.406747   22555 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.406944   22555 main.go:141] libmachine: (functional-884668) Calling .DriverName
I0924 18:40:22.407161   22555 ssh_runner.go:195] Run: systemctl --version
I0924 18:40:22.407185   22555 main.go:141] libmachine: (functional-884668) Calling .GetSSHHostname
I0924 18:40:22.410027   22555 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.410476   22555 main.go:141] libmachine: (functional-884668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:1b:a8", ip: ""} in network mk-functional-884668: {Iface:virbr1 ExpiryTime:2024-09-24 19:37:48 +0000 UTC Type:0 Mac:52:54:00:77:1b:a8 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-884668 Clientid:01:52:54:00:77:1b:a8}
I0924 18:40:22.410496   22555 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined IP address 192.168.39.232 and MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.410664   22555 main.go:141] libmachine: (functional-884668) Calling .GetSSHPort
I0924 18:40:22.410856   22555 main.go:141] libmachine: (functional-884668) Calling .GetSSHKeyPath
I0924 18:40:22.410988   22555 main.go:141] libmachine: (functional-884668) Calling .GetSSHUsername
I0924 18:40:22.411132   22555 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/functional-884668/id_rsa Username:docker}
I0924 18:40:22.564480   22555 ssh_runner.go:195] Run: sudo crictl images --output json
I0924 18:40:22.631811   22555 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.631834   22555 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.632130   22555 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.632147   22555 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:22.632155   22555 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.632155   22555 main.go:141] libmachine: (functional-884668) DBG | Closing plugin on server side
I0924 18:40:22.632163   22555 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.633297   22555 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.633312   22555 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884668 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-884668
size: "4943877"
- id: 1ee642a9b8aa1172c583fc7601d6deebdfecae05af5bdf2e1b3418015cd834fd
repoDigests:
- localhost/minikube-local-cache-test@sha256:6bdd57359d4ba1630f47caaf3fe6bcf92ac254b9656250ef5efd305673052c0f
repoTags:
- localhost/minikube-local-cache-test:functional-884668
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests:
- docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "44647101"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884668 image ls --format yaml --alsologtostderr:
I0924 18:40:22.127127   22497 out.go:345] Setting OutFile to fd 1 ...
I0924 18:40:22.127228   22497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.127238   22497 out.go:358] Setting ErrFile to fd 2...
I0924 18:40:22.127242   22497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.127454   22497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
I0924 18:40:22.128040   22497 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.128152   22497 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.128506   22497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.128546   22497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.144907   22497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
I0924 18:40:22.145406   22497 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.146075   22497 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.146106   22497 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.146413   22497 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.146616   22497 main.go:141] libmachine: (functional-884668) Calling .GetState
I0924 18:40:22.148459   22497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.148507   22497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.163229   22497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
I0924 18:40:22.163569   22497 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.164006   22497 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.164027   22497 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.164357   22497 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.164535   22497 main.go:141] libmachine: (functional-884668) Calling .DriverName
I0924 18:40:22.164711   22497 ssh_runner.go:195] Run: systemctl --version
I0924 18:40:22.164739   22497 main.go:141] libmachine: (functional-884668) Calling .GetSSHHostname
I0924 18:40:22.167237   22497 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.167588   22497 main.go:141] libmachine: (functional-884668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:1b:a8", ip: ""} in network mk-functional-884668: {Iface:virbr1 ExpiryTime:2024-09-24 19:37:48 +0000 UTC Type:0 Mac:52:54:00:77:1b:a8 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-884668 Clientid:01:52:54:00:77:1b:a8}
I0924 18:40:22.167616   22497 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined IP address 192.168.39.232 and MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.167767   22497 main.go:141] libmachine: (functional-884668) Calling .GetSSHPort
I0924 18:40:22.167943   22497 main.go:141] libmachine: (functional-884668) Calling .GetSSHKeyPath
I0924 18:40:22.168063   22497 main.go:141] libmachine: (functional-884668) Calling .GetSSHUsername
I0924 18:40:22.168183   22497 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/functional-884668/id_rsa Username:docker}
I0924 18:40:22.264179   22497 ssh_runner.go:195] Run: sudo crictl images --output json
I0924 18:40:22.315575   22497 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.315590   22497 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.315931   22497 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.315947   22497 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:22.315961   22497 main.go:141] libmachine: Making call to close driver server
I0924 18:40:22.315968   22497 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:22.316207   22497 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:22.316221   22497 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
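The YAML listing above is what minikube assembles from "sudo crictl images --output json" on the node (visible in the trace above). A minimal sketch of re-running the listing and checking that one of the tags shown is present; the grep pattern is only illustrative:

# Re-list the node's images in YAML and look for a tag from the output above.
out/minikube-linux-amd64 -p functional-884668 image ls --format yaml \
  | grep 'localhost/kicbase/echo-server:functional-884668'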

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884668 ssh pgrep buildkitd: exit status 1 (217.592054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image build -t localhost/my-image:functional-884668 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 image build -t localhost/my-image:functional-884668 testdata/build --alsologtostderr: (6.106340368s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884668 image build -t localhost/my-image:functional-884668 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d68d6a5e3ef
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-884668
--> 4dae227c2d7
Successfully tagged localhost/my-image:functional-884668
4dae227c2d75a3762314d67689ef54d09d054408445403e609d1d2ca6a2725d9
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884668 image build -t localhost/my-image:functional-884668 testdata/build --alsologtostderr:
I0924 18:40:22.366049   22549 out.go:345] Setting OutFile to fd 1 ...
I0924 18:40:22.366300   22549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.366332   22549 out.go:358] Setting ErrFile to fd 2...
I0924 18:40:22.366348   22549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 18:40:22.366661   22549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
I0924 18:40:22.367370   22549 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.367848   22549 config.go:182] Loaded profile config "functional-884668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0924 18:40:22.368176   22549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.368235   22549 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.383190   22549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38327
I0924 18:40:22.383753   22549 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.384431   22549 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.384463   22549 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.384910   22549 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.385112   22549 main.go:141] libmachine: (functional-884668) Calling .GetState
I0924 18:40:22.387318   22549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0924 18:40:22.387363   22549 main.go:141] libmachine: Launching plugin server for driver kvm2
I0924 18:40:22.403635   22549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
I0924 18:40:22.404176   22549 main.go:141] libmachine: () Calling .GetVersion
I0924 18:40:22.404721   22549 main.go:141] libmachine: Using API Version  1
I0924 18:40:22.404747   22549 main.go:141] libmachine: () Calling .SetConfigRaw
I0924 18:40:22.405138   22549 main.go:141] libmachine: () Calling .GetMachineName
I0924 18:40:22.405319   22549 main.go:141] libmachine: (functional-884668) Calling .DriverName
I0924 18:40:22.405466   22549 ssh_runner.go:195] Run: systemctl --version
I0924 18:40:22.405499   22549 main.go:141] libmachine: (functional-884668) Calling .GetSSHHostname
I0924 18:40:22.408386   22549 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.409022   22549 main.go:141] libmachine: (functional-884668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:1b:a8", ip: ""} in network mk-functional-884668: {Iface:virbr1 ExpiryTime:2024-09-24 19:37:48 +0000 UTC Type:0 Mac:52:54:00:77:1b:a8 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-884668 Clientid:01:52:54:00:77:1b:a8}
I0924 18:40:22.409059   22549 main.go:141] libmachine: (functional-884668) DBG | domain functional-884668 has defined IP address 192.168.39.232 and MAC address 52:54:00:77:1b:a8 in network mk-functional-884668
I0924 18:40:22.409211   22549 main.go:141] libmachine: (functional-884668) Calling .GetSSHPort
I0924 18:40:22.409372   22549 main.go:141] libmachine: (functional-884668) Calling .GetSSHKeyPath
I0924 18:40:22.409616   22549 main.go:141] libmachine: (functional-884668) Calling .GetSSHUsername
I0924 18:40:22.409786   22549 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/functional-884668/id_rsa Username:docker}
I0924 18:40:22.573944   22549 build_images.go:161] Building image from path: /tmp/build.3616678460.tar
I0924 18:40:22.574003   22549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0924 18:40:22.614737   22549 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3616678460.tar
I0924 18:40:22.634813   22549 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3616678460.tar: stat -c "%s %y" /var/lib/minikube/build/build.3616678460.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3616678460.tar': No such file or directory
I0924 18:40:22.634859   22549 ssh_runner.go:362] scp /tmp/build.3616678460.tar --> /var/lib/minikube/build/build.3616678460.tar (3072 bytes)
I0924 18:40:22.692886   22549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3616678460
I0924 18:40:22.729261   22549 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3616678460 -xf /var/lib/minikube/build/build.3616678460.tar
I0924 18:40:22.755481   22549 crio.go:315] Building image: /var/lib/minikube/build/build.3616678460
I0924 18:40:22.755556   22549 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-884668 /var/lib/minikube/build/build.3616678460 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0924 18:40:28.397173   22549 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-884668 /var/lib/minikube/build/build.3616678460 --cgroup-manager=cgroupfs: (5.641589876s)
I0924 18:40:28.397257   22549 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3616678460
I0924 18:40:28.409992   22549 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3616678460.tar
I0924 18:40:28.418423   22549 build_images.go:217] Built localhost/my-image:functional-884668 from /tmp/build.3616678460.tar
I0924 18:40:28.418463   22549 build_images.go:133] succeeded building to: functional-884668
I0924 18:40:28.418470   22549 build_images.go:134] failed building to: 
I0924 18:40:28.418496   22549 main.go:141] libmachine: Making call to close driver server
I0924 18:40:28.418512   22549 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:28.418871   22549 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:28.418887   22549 main.go:141] libmachine: Making call to close connection to plugin binary
I0924 18:40:28.418895   22549 main.go:141] libmachine: Making call to close driver server
I0924 18:40:28.418902   22549 main.go:141] libmachine: (functional-884668) Calling .Close
I0924 18:40:28.419099   22549 main.go:141] libmachine: Successfully made call to close driver server
I0924 18:40:28.419117   22549 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.52s)
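The three build steps in the stdout above imply a Dockerfile (or Containerfile) of roughly the shape below. testdata/build itself is not reproduced in this report, so treat this as a sketch: the /tmp path and the content of content.txt are assumptions, only the three instructions are taken from the log.

# Recreate an equivalent build context and build it inside the cluster's runtime.
mkdir -p /tmp/build-ctx
printf 'hello\n' > /tmp/build-ctx/content.txt
cat > /tmp/build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-884668 image build \
  -t localhost/my-image:functional-884668 /tmp/build-ctx --alsologtostderr
# Confirm the tag landed in the node's image store, as functional_test.go:451 does.
out/minikube-linux-amd64 -p functional-884668 image ls | grep my-image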

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.513319344s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-884668
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image load --daemon kicbase/echo-server:functional-884668 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 image load --daemon kicbase/echo-server:functional-884668 --alsologtostderr: (1.811511201s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 21745: os: process already finished
helpers_test.go:502: unable to terminate pid 21764: os: process already finished
helpers_test.go:502: unable to terminate pid 21786: os: process already finished
helpers_test.go:508: unable to kill pid 21711: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-884668 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5787cba7-72fe-43ee-8da1-7bc10a66c7cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5787cba7-72fe-43ee-8da1-7bc10a66c7cd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.003086104s
I0924 18:40:21.225963   10949 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.25s)
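testdata/testsvc.yaml is not included in this report. From the pod name, the run=nginx-svc selector and the LoadBalancer ingress IP read back in the later subtests, it is roughly equivalent to the sketch below; the image tag and port are assumptions.

kubectl --context functional-884668 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:alpine
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
    targetPort: 80
EOF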

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image load --daemon kicbase/echo-server:functional-884668 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-884668
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image load --daemon kicbase/echo-server:functional-884668 --alsologtostderr
E0924 18:40:08.126811   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 image load --daemon kicbase/echo-server:functional-884668 --alsologtostderr: (2.700662657s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image save kicbase/echo-server:functional-884668 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image rm kicbase/echo-server:functional-884668 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-884668
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-884668 image save --daemon kicbase/echo-server:functional-884668 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-884668 image save --daemon kicbase/echo-server:functional-884668 --alsologtostderr: (1.319380646s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-884668
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)
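The save/load subtests above exercise the full round trip between the host and the cluster runtime. Condensed into one sequence, using the same paths and tags that appear in the logs:

# Save the image from the cluster to a tarball on the host, remove it, load it back.
out/minikube-linux-amd64 -p functional-884668 image save kicbase/echo-server:functional-884668 \
  /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-884668 image rm kicbase/echo-server:functional-884668 --alsologtostderr
out/minikube-linux-amd64 -p functional-884668 image load \
  /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
# Push the cluster copy back into the local docker daemon and inspect it.
docker rmi kicbase/echo-server:functional-884668
out/minikube-linux-amd64 -p functional-884668 image save --daemon kicbase/echo-server:functional-884668 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-884668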

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-884668 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.93.248 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
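Reproducing the tunnel check by hand, as a sketch: the tunnel must stay running in a separate terminal, and the ingress IP (10.98.93.248 in this run) will differ between runs.

# Terminal 1: keep the tunnel alive so LoadBalancer services receive an ingress IP.
out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr
# Terminal 2: read the assigned IP (same jsonpath as the IngressIP subtest) and hit it.
SVC_IP=$(kubectl --context functional-884668 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sSf "http://${SVC_IP}/" >/dev/null && echo "tunnel at http://${SVC_IP} is working"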

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-884668 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-884668
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-884668
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-884668
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (194.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-685475 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 18:42:24.267323   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:42:51.969098   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-685475 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.724259758s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-685475 -- rollout status deployment/busybox: (3.757523381s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-gksmx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-hmkfk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-w6g8l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-gksmx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-hmkfk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-w6g8l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-gksmx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-hmkfk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-w6g8l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.79s)
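./testdata/ha/ha-pod-dns-test.yaml is not reproduced here. Judging by the rollout of deployment/busybox and the three busybox-7dff88458-* pods used for the nslookup checks, it is roughly the following; the image, labels and the sleep command are assumptions.

kubectl --context ha-685475 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
EOF
kubectl --context ha-685475 rollout status deployment/busybox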

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-gksmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-gksmx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-hmkfk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-hmkfk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-w6g8l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-685475 -- exec busybox-7dff88458-w6g8l -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
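The shell pipeline in the exec calls above extracts the host gateway IP from busybox's nslookup output. Broken out as a sketch, with the NR==5 assumption made explicit (with busybox's nslookup, the fifth output line is typically "Address 1: <ip> host.minikube.internal"):

POD=busybox-7dff88458-gksmx
# Resolve host.minikube.internal inside the pod and keep only the address field.
HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-685475 -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
# Confirm the pod can reach the KVM host on that address (192.168.39.1 in this run).
out/minikube-linux-amd64 kubectl -p ha-685475 -- exec "$POD" -- sh -c "ping -c 1 ${HOST_IP}"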

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (52.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-685475 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-685475 -v=7 --alsologtostderr: (51.581022395s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-685475 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0924 18:44:49.790332   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:49.796752   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:49.808117   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:49.829488   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:49.870885   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:49.952351   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status --output json -v=7 --alsologtostderr
E0924 18:44:50.113932   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:44:50.435458   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp testdata/cp-test.txt ha-685475:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test.txt"
E0924 18:44:51.077206   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt ha-685475-m02:/home/docker/cp-test_ha-685475_ha-685475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test_ha-685475_ha-685475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt ha-685475-m03:/home/docker/cp-test_ha-685475_ha-685475-m03.txt
E0924 18:44:52.359458   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test_ha-685475_ha-685475-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt ha-685475-m04:/home/docker/cp-test_ha-685475_ha-685475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test_ha-685475_ha-685475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp testdata/cp-test.txt ha-685475-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m02:/home/docker/cp-test.txt ha-685475:/home/docker/cp-test_ha-685475-m02_ha-685475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test.txt"
E0924 18:44:54.921120   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test_ha-685475-m02_ha-685475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m02:/home/docker/cp-test.txt ha-685475-m03:/home/docker/cp-test_ha-685475-m02_ha-685475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test_ha-685475-m02_ha-685475-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m02:/home/docker/cp-test.txt ha-685475-m04:/home/docker/cp-test_ha-685475-m02_ha-685475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test_ha-685475-m02_ha-685475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp testdata/cp-test.txt ha-685475-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt ha-685475:/home/docker/cp-test_ha-685475-m03_ha-685475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test_ha-685475-m03_ha-685475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt ha-685475-m02:/home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test_ha-685475-m03_ha-685475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m03:/home/docker/cp-test.txt ha-685475-m04:/home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test_ha-685475-m03_ha-685475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp testdata/cp-test.txt ha-685475-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile399016322/001/cp-test_ha-685475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test.txt"
E0924 18:45:00.043429   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt ha-685475:/home/docker/cp-test_ha-685475-m04_ha-685475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475 "sudo cat /home/docker/cp-test_ha-685475-m04_ha-685475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt ha-685475-m02:/home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test_ha-685475-m04_ha-685475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 cp ha-685475-m04:/home/docker/cp-test.txt ha-685475-m03:/home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m03 "sudo cat /home/docker/cp-test_ha-685475-m04_ha-685475-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.07s)
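The copy matrix above exercises every direction of "minikube cp". Condensed, with the same profile and node names (destination paths as in the log):

# host -> node, node -> host, and node -> node copies, the last verified over ssh.
out/minikube-linux-amd64 -p ha-685475 cp testdata/cp-test.txt ha-685475:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt /tmp/cp-test_ha-685475.txt
out/minikube-linux-amd64 -p ha-685475 cp ha-685475:/home/docker/cp-test.txt ha-685475-m02:/home/docker/cp-test_ha-685475_ha-685475-m02.txt
out/minikube-linux-amd64 -p ha-685475 ssh -n ha-685475-m02 "sudo cat /home/docker/cp-test_ha-685475_ha-685475-m02.txt"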

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.180907049s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-685475 node delete m03 -v=7 --alsologtostderr: (15.75886889s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (347.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-685475 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 18:57:24.268760   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 18:59:49.791431   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:01:12.856046   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-685475 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m46.495593001s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (347.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-685475 --control-plane -v=7 --alsologtostderr
E0924 19:02:24.266698   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-685475 --control-plane -v=7 --alsologtostderr: (1m14.913108333s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-685475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
x
+
TestJSONOutput/start/Command (47.98s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-253333 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-253333 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (47.976763368s)
--- PASS: TestJSONOutput/start/Command (47.98s)
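With --output=json, minikube emits its progress as a stream of JSON events, which is what the Audit, DistinctCurrentSteps and IncreasingCurrentSteps subtests below assert on. A sketch of consuming that stream with jq; the event type string and the currentstep/totalsteps/message field names are assumptions inferred from the test names and may differ between minikube versions.

out/minikube-linux-amd64 start -p json-output-253333 --output=json --user=testUser \
  --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'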

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-253333 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-253333 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.61s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-253333 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-253333 --output=json --user=testUser: (6.611875027s)
--- PASS: TestJSONOutput/stop/Command (6.61s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-925951 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-925951 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.095603ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"482945d9-d52c-42d7-9fc6-58b414b625a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-925951] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eaa1fbf0-a2c9-4831-8b12-846885304d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"a421b801-f773-42b1-9c68-483f293bc82b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33ea6e45-de78-4db4-a367-db7ef64f0dd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig"}}
	{"specversion":"1.0","id":"46f06e8a-3d55-49f9-a444-0ce88f2c185f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube"}}
	{"specversion":"1.0","id":"a1e18fa3-7929-4fe9-8216-9819f446f444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dae3d603-d944-4e33-969a-2c4c24153c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"825c8cfe-39d7-48d7-88f9-0bab0a5b68f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-925951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-925951
--- PASS: TestErrorJSONOutput (0.18s)
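Each line emitted by --output=json in the stdout block above is a CloudEvents-style envelope with specversion, type, and a data payload whose values are all strings. A minimal consumer sketch, assuming only the fields visible above; the event struct here is illustrative, not a type exported by minikube:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the fields visible in the stdout block above;
	// it is an illustrative type, not one of minikube's own.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Read the line-delimited JSON that `minikube start --output=json` prints, from stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
			} else {
				fmt.Println(ev.Data["message"])
			}
		}
	}

Piping `out/minikube-linux-amd64 start -p <profile> --output=json ... | go run consumer.go` would print the step and error messages in order; event types and fields beyond those shown in this run are assumptions.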

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (84.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-470593 --driver=kvm2  --container-runtime=crio
E0924 19:04:49.794675   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-470593 --driver=kvm2  --container-runtime=crio: (39.839779535s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-481287 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-481287 --driver=kvm2  --container-runtime=crio: (41.379625447s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-470593
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-481287
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-481287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-481287
helpers_test.go:175: Cleaning up "first-470593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-470593
--- PASS: TestMinikubeProfile (84.17s)
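The `profile list -ojson` calls above return a JSON document whose schema is not shown in this log. A sketch that shells out to the same command and decodes it generically, assuming only that the payload is a JSON object:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Run the same command the test uses, with the binary path used throughout this report.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Decode without committing to a schema, since the log does not show the payload.
		var doc map[string]interface{}
		if err := json.Unmarshal(out, &doc); err != nil {
			log.Fatal(err)
		}
		for key := range doc {
			fmt.Println("top-level key:", key)
		}
	}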

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (30.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-655949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-655949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.804711088s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-655949 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-655949 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
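The two verification steps above (list the mounted host directory, then look for a 9p entry in the guest's mount table) can be scripted directly. A minimal sketch of the same check, assuming the profile name from this run and doing the grep host-side:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	const profile = "mount-start-1-655949" // profile name from this run

	func main() {
		// List the mounted host directory, as the first verification step does.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "--", "ls /minikube-host").Output(); err != nil {
			log.Fatalf("ls /minikube-host: %v", err)
		} else {
			fmt.Print(string(out))
		}
		// Confirm a 9p filesystem shows up in the guest's mount table.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "--", "mount").Output()
		if err != nil {
			log.Fatal(err)
		}
		if !strings.Contains(string(out), "9p") {
			log.Fatal("no 9p mount found")
		}
		fmt.Println("9p mount present")
	}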

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (25.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670775 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670775 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.509086409s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.51s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-655949 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-670775
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-670775: (1.277505259s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.66s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670775
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670775: (21.655353027s)
--- PASS: TestMountStart/serial/RestartStopped (22.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670775 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (133.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-624105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 19:07:24.266285   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-624105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m12.82883728s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.22s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-624105 -- rollout status deployment/busybox: (3.575787611s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-b22dm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-z8dzp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-b22dm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-z8dzp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-b22dm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-z8dzp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-b22dm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-b22dm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-z8dzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-624105 -- exec busybox-7dff88458-z8dzp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
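The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above takes the third space-separated field of the fifth line of busybox's nslookup output, then pings that address. A sketch of the same extraction done host-side rather than with awk/cut, assuming the busybox output layout the pipeline relies on and reusing a pod name from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-7dff88458-b22dm" // pod name from this run
		// Run nslookup inside the pod, as the test does.
		out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-624105", "--",
			"exec", pod, "--", "nslookup", "host.minikube.internal").Output()
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) < 5 {
			log.Fatal("unexpected nslookup output")
		}
		// Mirror `awk 'NR==5' | cut -d' ' -f3`: third space-separated field of line 5.
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			log.Fatal("unexpected address line")
		}
		fmt.Println("host address:", fields[2])
	}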

                                                
                                    
x
+
TestMultiNode/serial/AddNode (47.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-624105 -v 3 --alsologtostderr
E0924 19:09:49.790472   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-624105 -v 3 --alsologtostderr: (47.229973798s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.77s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-624105 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0924 19:10:27.332738   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp testdata/cp-test.txt multinode-624105:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105:/home/docker/cp-test.txt multinode-624105-m02:/home/docker/cp-test_multinode-624105_multinode-624105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test_multinode-624105_multinode-624105-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105:/home/docker/cp-test.txt multinode-624105-m03:/home/docker/cp-test_multinode-624105_multinode-624105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test_multinode-624105_multinode-624105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp testdata/cp-test.txt multinode-624105-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt multinode-624105:/home/docker/cp-test_multinode-624105-m02_multinode-624105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test_multinode-624105-m02_multinode-624105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m02:/home/docker/cp-test.txt multinode-624105-m03:/home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test_multinode-624105-m02_multinode-624105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp testdata/cp-test.txt multinode-624105-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3111081610/001/cp-test_multinode-624105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt multinode-624105:/home/docker/cp-test_multinode-624105-m03_multinode-624105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105 "sudo cat /home/docker/cp-test_multinode-624105-m03_multinode-624105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 cp multinode-624105-m03:/home/docker/cp-test.txt multinode-624105-m02:/home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 ssh -n multinode-624105-m02 "sudo cat /home/docker/cp-test_multinode-624105-m03_multinode-624105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)
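Every `cp` above is paired with an `ssh -n <node> "sudo cat ..."` read-back. A compact sketch of one such round-trip check, assuming the profile and file paths used in this run; the run helper is illustrative:

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	const profile = "multinode-624105" // profile from this run

	// run invokes the minikube binary used throughout this report and returns its stdout.
	func run(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
		if err != nil {
			log.Fatalf("%v: %v", args, err)
		}
		return out
	}

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Copy into the node, then read it back over ssh and compare, as the helpers above do.
		run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		got := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatal("copied file does not match the source")
		}
		fmt.Println("cp round-trip verified")
	}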

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 node stop m03: (1.350609335s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-624105 status: exit status 7 (407.102943ms)

                                                
                                                
-- stdout --
	multinode-624105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-624105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-624105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr: exit status 7 (406.793801ms)

                                                
                                                
-- stdout --
	multinode-624105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-624105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-624105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:10:36.111053   39454 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:10:36.111283   39454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:10:36.111292   39454 out.go:358] Setting ErrFile to fd 2...
	I0924 19:10:36.111297   39454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:10:36.111446   39454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:10:36.111594   39454 out.go:352] Setting JSON to false
	I0924 19:10:36.111625   39454 mustload.go:65] Loading cluster: multinode-624105
	I0924 19:10:36.111715   39454 notify.go:220] Checking for updates...
	I0924 19:10:36.111994   39454 config.go:182] Loaded profile config "multinode-624105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:10:36.112010   39454 status.go:174] checking status of multinode-624105 ...
	I0924 19:10:36.112468   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.112516   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.131440   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0924 19:10:36.131890   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.132458   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.132482   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.132824   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.132973   39454 main.go:141] libmachine: (multinode-624105) Calling .GetState
	I0924 19:10:36.134725   39454 status.go:364] multinode-624105 host status = "Running" (err=<nil>)
	I0924 19:10:36.134741   39454 host.go:66] Checking if "multinode-624105" exists ...
	I0924 19:10:36.135160   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.135207   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.149905   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33411
	I0924 19:10:36.150354   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.150916   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.150959   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.151283   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.151479   39454 main.go:141] libmachine: (multinode-624105) Calling .GetIP
	I0924 19:10:36.154357   39454 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:10:36.154760   39454 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:10:36.154798   39454 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:10:36.154958   39454 host.go:66] Checking if "multinode-624105" exists ...
	I0924 19:10:36.155339   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.155380   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.170606   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0924 19:10:36.171033   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.171460   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.171480   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.171779   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.171951   39454 main.go:141] libmachine: (multinode-624105) Calling .DriverName
	I0924 19:10:36.172118   39454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 19:10:36.172149   39454 main.go:141] libmachine: (multinode-624105) Calling .GetSSHHostname
	I0924 19:10:36.174731   39454 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:10:36.175138   39454 main.go:141] libmachine: (multinode-624105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:0e:e0", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:07:33 +0000 UTC Type:0 Mac:52:54:00:e4:0e:e0 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-624105 Clientid:01:52:54:00:e4:0e:e0}
	I0924 19:10:36.175170   39454 main.go:141] libmachine: (multinode-624105) DBG | domain multinode-624105 has defined IP address 192.168.39.206 and MAC address 52:54:00:e4:0e:e0 in network mk-multinode-624105
	I0924 19:10:36.175291   39454 main.go:141] libmachine: (multinode-624105) Calling .GetSSHPort
	I0924 19:10:36.175457   39454 main.go:141] libmachine: (multinode-624105) Calling .GetSSHKeyPath
	I0924 19:10:36.175583   39454 main.go:141] libmachine: (multinode-624105) Calling .GetSSHUsername
	I0924 19:10:36.175681   39454 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105/id_rsa Username:docker}
	I0924 19:10:36.253521   39454 ssh_runner.go:195] Run: systemctl --version
	I0924 19:10:36.259163   39454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:10:36.272486   39454 kubeconfig.go:125] found "multinode-624105" server: "https://192.168.39.206:8443"
	I0924 19:10:36.272515   39454 api_server.go:166] Checking apiserver status ...
	I0924 19:10:36.272546   39454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 19:10:36.284877   39454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1077/cgroup
	W0924 19:10:36.293563   39454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1077/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0924 19:10:36.293611   39454 ssh_runner.go:195] Run: ls
	I0924 19:10:36.297619   39454 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I0924 19:10:36.301764   39454 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I0924 19:10:36.301783   39454 status.go:456] multinode-624105 apiserver status = Running (err=<nil>)
	I0924 19:10:36.301792   39454 status.go:176] multinode-624105 status: &{Name:multinode-624105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:10:36.301810   39454 status.go:174] checking status of multinode-624105-m02 ...
	I0924 19:10:36.302130   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.302167   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.317906   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0924 19:10:36.318334   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.318811   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.318839   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.319187   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.319380   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetState
	I0924 19:10:36.320911   39454 status.go:364] multinode-624105-m02 host status = "Running" (err=<nil>)
	I0924 19:10:36.320928   39454 host.go:66] Checking if "multinode-624105-m02" exists ...
	I0924 19:10:36.321266   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.321306   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.335989   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0924 19:10:36.336438   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.336931   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.336958   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.337255   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.337416   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetIP
	I0924 19:10:36.339913   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | domain multinode-624105-m02 has defined MAC address 52:54:00:8c:c0:c3 in network mk-multinode-624105
	I0924 19:10:36.340268   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:c0:c3", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:09:01 +0000 UTC Type:0 Mac:52:54:00:8c:c0:c3 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-624105-m02 Clientid:01:52:54:00:8c:c0:c3}
	I0924 19:10:36.340644   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | domain multinode-624105-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:8c:c0:c3 in network mk-multinode-624105
	I0924 19:10:36.341459   39454 host.go:66] Checking if "multinode-624105-m02" exists ...
	I0924 19:10:36.342319   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.342360   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.358268   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0924 19:10:36.358716   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.359182   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.359202   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.359490   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.359676   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .DriverName
	I0924 19:10:36.359848   39454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 19:10:36.359869   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetSSHHostname
	I0924 19:10:36.362690   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | domain multinode-624105-m02 has defined MAC address 52:54:00:8c:c0:c3 in network mk-multinode-624105
	I0924 19:10:36.363177   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:c0:c3", ip: ""} in network mk-multinode-624105: {Iface:virbr1 ExpiryTime:2024-09-24 20:09:01 +0000 UTC Type:0 Mac:52:54:00:8c:c0:c3 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-624105-m02 Clientid:01:52:54:00:8c:c0:c3}
	I0924 19:10:36.363208   39454 main.go:141] libmachine: (multinode-624105-m02) DBG | domain multinode-624105-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:8c:c0:c3 in network mk-multinode-624105
	I0924 19:10:36.363369   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetSSHPort
	I0924 19:10:36.363528   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetSSHKeyPath
	I0924 19:10:36.363678   39454 main.go:141] libmachine: (multinode-624105-m02) Calling .GetSSHUsername
	I0924 19:10:36.363826   39454 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19700-3751/.minikube/machines/multinode-624105-m02/id_rsa Username:docker}
	I0924 19:10:36.445331   39454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 19:10:36.458260   39454 status.go:176] multinode-624105-m02 status: &{Name:multinode-624105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 19:10:36.458293   39454 status.go:174] checking status of multinode-624105-m03 ...
	I0924 19:10:36.458595   39454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 19:10:36.458641   39454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 19:10:36.473769   39454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0924 19:10:36.474158   39454 main.go:141] libmachine: () Calling .GetVersion
	I0924 19:10:36.474522   39454 main.go:141] libmachine: Using API Version  1
	I0924 19:10:36.474539   39454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 19:10:36.474878   39454 main.go:141] libmachine: () Calling .GetMachineName
	I0924 19:10:36.475027   39454 main.go:141] libmachine: (multinode-624105-m03) Calling .GetState
	I0924 19:10:36.476410   39454 status.go:364] multinode-624105-m03 host status = "Stopped" (err=<nil>)
	I0924 19:10:36.476421   39454 status.go:377] host is not running, skipping remaining checks
	I0924 19:10:36.476426   39454 status.go:176] multinode-624105-m03 status: &{Name:multinode-624105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
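The non-zero exits from `status` above still carry the full per-node report on stdout, so a caller has to read the output even when the command "fails". A sketch that does that, assuming only the behaviour visible in this block (exit status 7 with the report on stdout):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-624105", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// status exits non-zero when any node is stopped (exit status 7 above),
			// but the report itself is still written to stdout.
			fmt.Printf("status exited %d; report follows:\n", exitErr.ExitCode())
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}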

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 node start m03 -v=7 --alsologtostderr: (36.587203655s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.19s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-624105 node delete m03: (1.42119062s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.92s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (173.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-624105 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 19:19:49.794745   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-624105 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m52.581491206s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-624105 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (173.08s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-624105
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-624105-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-624105-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.565583ms)

                                                
                                                
-- stdout --
	* [multinode-624105-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-624105-m02' is duplicated with machine name 'multinode-624105-m02' in profile 'multinode-624105'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-624105-m03 --driver=kvm2  --container-runtime=crio
E0924 19:22:24.267054   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-624105-m03 --driver=kvm2  --container-runtime=crio: (40.144303741s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-624105
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-624105: exit status 80 (200.325957ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-624105 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-624105-m03 already exists in multinode-624105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-624105-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.40s)

                                                
                                    
x
+
TestScheduledStopUnix (110.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-929840 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-929840 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.151512094s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-929840 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-929840 -n scheduled-stop-929840
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-929840 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 19:25:59.340484   10949 retry.go:31] will retry after 88.797µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.341650   10949 retry.go:31] will retry after 77.737µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.342817   10949 retry.go:31] will retry after 229.919µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.343963   10949 retry.go:31] will retry after 204.613µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.345067   10949 retry.go:31] will retry after 492.07µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.346213   10949 retry.go:31] will retry after 388.703µs: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.347345   10949 retry.go:31] will retry after 1.248631ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.349515   10949 retry.go:31] will retry after 1.68838ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.351741   10949 retry.go:31] will retry after 1.877007ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.353950   10949 retry.go:31] will retry after 4.964325ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.359141   10949 retry.go:31] will retry after 7.60282ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.367363   10949 retry.go:31] will retry after 9.186153ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.377623   10949 retry.go:31] will retry after 13.563694ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
I0924 19:25:59.391899   10949 retry.go:31] will retry after 26.919043ms: open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/scheduled-stop-929840/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-929840 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-929840 -n scheduled-stop-929840
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-929840
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-929840 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0924 19:27:07.337035   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-929840
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-929840: exit status 7 (64.512388ms)

                                                
                                                
-- stdout --
	scheduled-stop-929840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-929840 -n scheduled-stop-929840
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-929840 -n scheduled-stop-929840: exit status 7 (64.377725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-929840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-929840
--- PASS: TestScheduledStopUnix (110.71s)
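The escalating "will retry after ..." lines at the top of this block come from minikube's retry helper polling for the scheduled-stop pid file. A minimal Go sketch of the same poll-with-growing-delay pattern, using a hypothetical waitForFile helper rather than the actual retry.go code:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls path with a growing delay, mirroring the backoff
	// visible in the retry.go lines above. Hypothetical helper, not minikube code.
	func waitForFile(path string, maxWait time.Duration) error {
		delay := 500 * time.Microsecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v\n", delay)
			time.Sleep(delay)
			delay *= 2 // roughly double the wait on each attempt
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForFile("/tmp/scheduled-stop.pid", 5*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}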

                                                
                                    
x
+
TestRunningBinaryUpgrade (188.99s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3760743615 start -p running-upgrade-545449 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0924 19:27:24.267375   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3760743615 start -p running-upgrade-545449 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.729461138s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-545449 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-545449 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.303913384s)
helpers_test.go:175: Cleaning up "running-upgrade-545449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-545449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-545449: (1.171384468s)
--- PASS: TestRunningBinaryUpgrade (188.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (83.295086ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-466611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
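This pass means minikube rejected the flag combination up front: exit status 14 corresponds to the MK_USAGE error shown in the stderr block. A minimal Go test sketch of the same assertion against the locally built binary (illustrative, not the actual no_kubernetes_test.go code):

	package integration

	import (
		"errors"
		"os/exec"
		"testing"
	)

	// Expect `minikube start --no-kubernetes --kubernetes-version=...` to fail
	// with exit status 14 (MK_USAGE), as seen in the log above.
	func TestNoKubernetesRejectsVersionFlag(t *testing.T) {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-466611",
			"--no-kubernetes", "--kubernetes-version=1.20",
			"--driver=kvm2", "--container-runtime=crio")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) {
			t.Fatalf("expected a non-zero exit, got: %v", err)
		}
		if code := exitErr.ExitCode(); code != 14 {
			t.Fatalf("expected exit status 14 (MK_USAGE), got %d", code)
		}
	}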

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (91.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466611 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466611 --driver=kvm2  --container-runtime=crio: (1m30.808491218s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-466611 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.05s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (117.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2190937372 start -p stopped-upgrade-076487 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2190937372 start -p stopped-upgrade-076487 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m2.544478481s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2190937372 -p stopped-upgrade-076487 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2190937372 -p stopped-upgrade-076487 stop: (11.818795528s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-076487 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0924 19:29:49.790482   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-076487 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.522770787s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (34.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.702883548s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-466611 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-466611 status -o json: exit status 2 (256.350783ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-466611","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-466611
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.75s)
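The status -o json output above is a single flat JSON object, so the state the test expects here (host Running, kubelet and apiserver Stopped) can be checked by decoding it directly. A small sketch that mirrors only the fields visible in the stdout block:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// Field names taken from the `minikube status -o json` output shown above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := []byte(`{"Name":"NoKubernetes-466611","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
		var st profileStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			log.Fatal(err)
		}
		// With --no-kubernetes the VM stays up while kubelet and the API server stay stopped.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}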

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466611 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.549947944s)
--- PASS: TestNoKubernetes/serial/Start (28.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-466611 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-466611 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.953258ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (27.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.927591724s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.754937876s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-466611
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-466611: (2.72791072s)
--- PASS: TestNoKubernetes/serial/Stop (2.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466611 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466611 --driver=kvm2  --container-runtime=crio: (21.656396484s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-076487
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-038637 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-038637 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (999.474766ms)

                                                
                                                
-- stdout --
	* [false-038637] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 19:30:35.986142   50481 out.go:345] Setting OutFile to fd 1 ...
	I0924 19:30:35.986395   50481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:30:35.986423   50481 out.go:358] Setting ErrFile to fd 2...
	I0924 19:30:35.986440   50481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 19:30:35.986929   50481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3751/.minikube/bin
	I0924 19:30:35.987933   50481 out.go:352] Setting JSON to false
	I0924 19:30:35.988860   50481 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4387,"bootTime":1727201849,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 19:30:35.988972   50481 start.go:139] virtualization: kvm guest
	I0924 19:30:35.991272   50481 out.go:177] * [false-038637] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 19:30:35.992777   50481 notify.go:220] Checking for updates...
	I0924 19:30:35.992815   50481 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 19:30:35.994212   50481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 19:30:35.995630   50481 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3751/kubeconfig
	I0924 19:30:35.996923   50481 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3751/.minikube
	I0924 19:30:35.998301   50481 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 19:30:35.999667   50481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 19:30:36.001530   50481 config.go:182] Loaded profile config "NoKubernetes-466611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0924 19:30:36.001622   50481 config.go:182] Loaded profile config "force-systemd-env-940861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 19:30:36.001697   50481 config.go:182] Loaded profile config "kubernetes-upgrade-629510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 19:30:36.001778   50481 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 19:30:36.936424   50481 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 19:30:36.937645   50481 start.go:297] selected driver: kvm2
	I0924 19:30:36.937658   50481 start.go:901] validating driver "kvm2" against <nil>
	I0924 19:30:36.937669   50481 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 19:30:36.939562   50481 out.go:201] 
	W0924 19:30:36.941009   50481 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0924 19:30:36.942215   50481 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-038637 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-038637

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-038637"

                                                
                                                
----------------------- debugLogs end: false-038637 [took: 2.70167945s] --------------------------------
helpers_test.go:175: Cleaning up "false-038637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-038637
--- PASS: TestNetworkPlugins/group/false (3.85s)
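Each ">>> ..." probe in the debugLogs dump above is just a command run against the never-created false-038637 profile, with its combined output recorded regardless of exit status. A compact sketch of that collection loop, using a hypothetical three-entry probe list instead of the full set net_test.go walks through:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "false-038637"
		// A few of the probes seen above; the real test runs many more.
		probes := []struct {
			name string
			args []string
		}{
			{"netcat: nslookup kubernetes.default", []string{"kubectl", "--context", profile, "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"}},
			{"host: /etc/resolv.conf", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"}},
			{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		}
		for _, p := range probes {
			// Errors are expected when the profile does not exist; keep whatever output we get.
			out, _ := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
			fmt.Printf(">>> %s:\n%s\n", p.name, out)
		}
	}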

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-466611 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-466611 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.576034ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestPause/serial/Start (104.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-058963 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-058963 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m44.846653193s)
--- PASS: TestPause/serial/Start (104.85s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-058963 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-058963 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.430497644s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (58.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (58.200450629s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (82.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.159157714s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.16s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-058963 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-058963 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-058963 --output=json --layout=cluster: exit status 2 (264.358451ms)

                                                
                                                
-- stdout --
	{"Name":"pause-058963","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-058963","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
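This is the one place in the run where the cluster-layout status codes show up: 418 for the paused apiserver, 405 for the stopped kubelet, 200 for OK, with the status command itself exiting 2 while components are not running. A sketch that decodes a trimmed copy of the stdout above (field names taken from that output, not from a published schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// Field names copied from the --layout=cluster output shown above.
	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		// Trimmed from the full JSON in the stdout block; unknown fields are ignored anyway.
		raw := []byte(`{"Name":"pause-058963","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-058963","StatusCode":200,"StatusName":"OK"}]}`)
		var cs clusterStatus
		if err := json.Unmarshal(raw, &cs); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d %s (%d node(s))\n", cs.Name, cs.StatusCode, cs.StatusName, len(cs.Nodes))
	}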

                                                
                                    
x
+
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-058963 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-058963 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-058963 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.86s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (4.09s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.093587196s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (80.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.347244752s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-038637 "pgrep -a kubelet"
I0924 19:34:06.861728   10949 config.go:182] Loaded profile config "auto-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xhqfr" [638e0613-0c7d-40ef-8a60-2f92faf90e84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xhqfr" [638e0613-0c7d-40ef-8a60-2f92faf90e84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004052059s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
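The three quick checks above exercise the same netcat deployment: a cluster DNS lookup, a connection to the pod's own localhost port, and a hairpin connection back to the pod through its netcat service name. A table-driven sketch of those probes, assuming kubectl and the auto-038637 context from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		ctx := "auto-038637"
		probes := []struct {
			name string
			cmd  string
		}{
			{"DNS", "nslookup kubernetes.default"},
			{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
			{"HairPin", "nc -w 5 -i 5 -z netcat 8080"}, // the pod reaching itself via its service name
		}
		for _, p := range probes {
			out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
				"--", "/bin/sh", "-c", p.cmd).CombinedOutput()
			if err != nil {
				log.Fatalf("%s probe failed: %v\n%s", p.name, err, out)
			}
			fmt.Printf("%s probe ok\n", p.name)
		}
	}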

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (91.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m31.004878227s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n6r8z" [96c0bde0-9f2f-4c16-917c-b5dadf0777b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003870062s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
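Waiting for pods matching app=kindnet, as above, amounts to listing pods by label selector until one reports Running. A sketch of that wait with client-go, using the kubeconfig path from this run; the integration tests use their own kapi helpers, so this is illustrative only:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used throughout this report.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19700-3751/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=kindnet"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("%s is Running\n", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for app=kindnet")
	}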

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-038637 "pgrep -a kubelet"
I0924 19:34:44.459245   10949 config.go:182] Loaded profile config "kindnet-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jpwfm" [3c36c1c8-f70d-4a66-be7f-5c51bd6a36fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jpwfm" [3c36c1c8-f70d-4a66-be7f-5c51bd6a36fb] Running
E0924 19:34:49.790301   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005041613s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (91.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m31.142095334s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fc42w" [d141faf5-0bb7-4e1d-89d9-2b05c9d36177] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004400074s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-038637 "pgrep -a kubelet"
I0924 19:35:19.778293   10949 config.go:182] Loaded profile config "flannel-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pxvm7" [41e54edf-9d12-458e-a7ac-6b276717ea6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pxvm7" [41e54edf-9d12-458e-a7ac-6b276717ea6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003675922s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (75.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.467230521s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.47s)
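
The custom-flannel group exercises --cni with a path to a user-supplied manifest instead of a built-in keyword. The same shape works outside the suite; the manifest path below is a hypothetical stand-in for the repository's testdata/kube-flannel.yaml:
    minikube start -p custom-cni-demo \
      --driver=kvm2 --container-runtime=crio \
      --cni=/path/to/my-cni-manifest.yaml \
      --wait=true --wait-timeout=15m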

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (98.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-038637 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.729492878s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-038637 "pgrep -a kubelet"
I0924 19:36:08.018707   10949 config.go:182] Loaded profile config "enable-default-cni-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-77nf8" [69b4a992-3cac-4edd-9725-7df058c102c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-77nf8" [69b4a992-3cac-4edd-9725-7df058c102c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004507277s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-038637 "pgrep -a kubelet"
I0924 19:36:44.436049   10949 config.go:182] Loaded profile config "bridge-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-657h9" [903369c4-0b2d-4b6e-b2f8-f438ec374395] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-657h9" [903369c4-0b2d-4b6e-b2f8-f438ec374395] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003978127s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-038637 "pgrep -a kubelet"
I0924 19:36:57.011535   10949 config.go:182] Loaded profile config "custom-flannel-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-038637 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ndpph" [5260fb48-7c7f-44d3-b218-7e43f377ad2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ndpph" [5260fb48-7c7f-44d3-b218-7e43f377ad2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006250823s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (102.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-965745 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-965745 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m42.732073638s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (74.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-311319 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 19:37:24.267235   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-311319 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m14.548875202s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vpdl8" [82f2810b-5029-488b-b145-6610f722c369] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004070588s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-038637 "pgrep -a kubelet"
I0924 19:37:33.784548   10949 config.go:182] Loaded profile config "calico-038637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-038637 replace --force -f testdata/netcat-deployment.yaml
I0924 19:37:33.978026   10949 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dfkjn" [33458ad4-fbfa-4e48-b762-e071893b0f7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dfkjn" [33458ad4-fbfa-4e48-b762-e071893b0f7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.010562549s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-038637 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-038637 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-093771 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-093771 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m34.577129s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.58s)
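
default-k8s-diff-port starts the cluster with --apiserver-port=8444 instead of the default 8443. Whether the port actually took effect can be read back from the generated kubeconfig; this sketch assumes minikube named the kubeconfig cluster after the profile, as it normally does:
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-093771")].cluster.server}'
    # expected to end in :8444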

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-311319 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1064952b-782d-4e91-8441-eb7a37c81d08] Pending
helpers_test.go:344: "busybox" [1064952b-782d-4e91-8441-eb7a37c81d08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1064952b-782d-4e91-8441-eb7a37c81d08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004444528s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-311319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)
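
DeployApp creates the repository's testdata/busybox.yaml, waits for the pod, then execs "ulimit -n" to confirm the container's file-descriptor limit is readable. A rough stand-alone equivalent that does not rely on the testdata file is sketched below; the pod name, image tag, and sleep command are illustrative assumptions, not the manifest's actual contents:
    kubectl --context embed-certs-311319 run busybox --image=busybox:1.36 \
      --labels=integration-test=busybox --restart=Never -- sleep 3600
    kubectl --context embed-certs-311319 wait --for=condition=ready pod busybox --timeout=8m
    kubectl --context embed-certs-311319 exec busybox -- /bin/sh -c "ulimit -n"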

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-311319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-311319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)
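
EnableAddonWhileActive turns on metrics-server but redirects its image to a deliberately unreachable registry (fake.domain), then inspects the deployment; the describe step is what confirms the override landed. A sketch of that inspection, assuming the addon creates a deployment named metrics-server in kube-system as the command above indicates:
    minikube addons enable metrics-server -p embed-certs-311319 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-311319 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to start with fake.domain/ if the registry override was applied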

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-965745 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [97f162b3-8eb4-4b04-af2b-978373632a7a] Pending
helpers_test.go:344: "busybox" [97f162b3-8eb4-4b04-af2b-978373632a7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [97f162b3-8eb4-4b04-af2b-978373632a7a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004099721s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-965745 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-965745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-965745 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [04ce0a6f-24cb-416c-a396-155a1c254918] Pending
helpers_test.go:344: "busybox" [04ce0a6f-24cb-416c-a396-155a1c254918] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0924 19:39:38.249083   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.255437   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.266871   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.288235   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.329632   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.411097   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.572655   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:38.894046   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:39.536161   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [04ce0a6f-24cb-416c-a396-155a1c254918] Running
E0924 19:39:40.818109   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:39:43.380042   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003953751s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-093771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-093771 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (661.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-311319 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-311319 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m1.481376157s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-311319 -n embed-certs-311319
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (661.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (568.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-965745 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 19:41:44.662381   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.668712   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.680052   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.701390   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.742737   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.824201   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:44.985774   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:45.308042   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:45.949731   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:41:47.231187   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-965745 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m28.26348438s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965745 -n no-preload-965745
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (568.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (574.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-093771 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 19:42:22.108847   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/kindnet-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:24.266663   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/addons-218885/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:25.637898   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.584078   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.590443   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.601832   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.623181   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.664587   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.746084   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:27.907676   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:28.229497   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:28.871818   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:30.153637   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:30.167039   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/enable-default-cni-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:32.715776   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:37.837116   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/calico-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 19:42:38.199843   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-093771 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m33.820658461s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-093771 -n default-k8s-diff-port-093771
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (574.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-510301 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-510301 --alsologtostderr -v=3: (1.275850494s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510301 -n old-k8s-version-510301: exit status 7 (63.738982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-510301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
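
EnableAddonAfterStop leans on minikube status exit codes: with the {{.Host}} template the command prints Stopped and exits 7 while the VM is down, which the test treats as acceptable before enabling the dashboard addon offline. A shell sketch of the same check, assuming exit code 7 keeps meaning "host stopped" as it does in this run:
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-510301 -n old-k8s-version-510301
    rc=$?
    if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then
      out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-510301 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi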

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-813973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 20:06:44.662926   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/bridge-038637/client.crt: no such file or directory" logger="UnhandledError"
E0924 20:06:57.223532   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/custom-flannel-038637/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-813973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (49.620269538s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.62s)
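
newest-cni starts with --network-plugin=cni and a kubeadm pod-network-cidr override, but only waits for the apiserver, system pods, and default service account, so ordinary workloads may stay unschedulable until a CNI is applied (hence the warnings in the later subtests). Whether the extra-config reached kubeadm can be spot-checked from the node object; a sketch, assuming the kubectl context carries the profile name:
    kubectl --context newest-cni-813973 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
    # expected to fall inside 10.42.0.0/16 if the kubeadm extra-config was honored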

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-813973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-813973 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-813973 --alsologtostderr -v=3: (7.30489575s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813973 -n newest-cni-813973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813973 -n newest-cni-813973: exit status 7 (74.577832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-813973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-813973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 20:07:52.863726   10949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19700-3751/.minikube/profiles/functional-884668/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-813973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (35.280122963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-813973 -n newest-cni-813973
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-813973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
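
VerifyKubernetesImages lists the images present on the node and flags anything that is not part of a stock minikube/Kubernetes install (here the leftover kindest/kindnetd image). A rough manual version is sketched below; the grep pattern is only an approximation of what the test considers a minikube-provided image:
    out/minikube-linux-amd64 -p newest-cni-813973 image list --format=json
    # or, for a quick eyeball without JSON tooling:
    out/minikube-linux-amd64 -p newest-cni-813973 image list | grep -v -E 'registry\.k8s\.io|gcr\.io/k8s-minikube'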

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-813973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813973 -n newest-cni-813973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813973 -n newest-cni-813973: exit status 2 (232.985396ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813973 -n newest-cni-813973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813973 -n newest-cni-813973: exit status 2 (224.408289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-813973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-813973 -n newest-cni-813973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-813973 -n newest-cni-813973
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)
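
The Pause subtest pauses the control plane, confirms via the status templates that the apiserver reports Paused and the kubelet reports Stopped (both returned with exit status 2, which the test tolerates), then unpauses and re-checks. A condensed shell sketch of that round trip, using the same flags as the run above:
    p=newest-cni-813973
    out/minikube-linux-amd64 pause -p "$p" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p" -n "$p" || true   # Paused, exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$p" -n "$p" || true     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p "$p" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p" -n "$p"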

                                                
                                    

Test skip (32/318)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-038637 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-038637

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-038637"

                                                
                                                
----------------------- debugLogs end: kubenet-038637 [took: 2.590718554s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-038637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-038637
--- SKIP: TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
I0924 19:30:41.039193   10949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 19:30:41.039264   10949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0924 19:30:41.071132   10949 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0924 19:30:41.071166   10949 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0924 19:30:41.071242   10949 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 19:30:41.071272   10949 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate811114061/002/docker-machine-driver-kvm2
I0924 19:30:41.421853   10949 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate811114061/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640] Decompressors:map[bz2:0xc000717c60 gz:0xc000717c68 tar:0xc000717be0 tar.bz2:0xc000717bf0 tar.gz:0xc000717c30 tar.xz:0xc000717c40 tar.zst:0xc000717c50 tbz2:0xc000717bf0 tgz:0xc000717c30 txz:0xc000717c40 tzst:0xc000717c50 xz:0xc000717c80 zip:0xc000717c90 zst:0xc000717c88] Getters:map[file:0xc001c83650 http:0xc00003e1e0 https:0xc00003e230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 19:30:41.421901   10949 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate811114061/002/docker-machine-driver-kvm2
panic.go:629: 
----------------------- debugLogs start: cilium-038637 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-038637" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-038637

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-038637" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-038637"

                                                
                                                
----------------------- debugLogs end: cilium-038637 [took: 2.965847709s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-038637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-038637
--- SKIP: TestNetworkPlugins/group/cilium (3.10s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-119609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-119609
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    